I’m not sure most people would have a near-zero chance of getting anywhere.
If AGI researchers took physical security super seriously, I bet this would make malicious actors quite unlikely to succeed. But it doesn’t seem like they’re doing this right now, and I’m not sure they will start.
Theft, extortion, hacking, eavesdropping, and building botnets are things a normal person could do, so I don’t see why they wouldn’t have a fighting chance. I’ve been thinking about how someone could currently acquire private code from Google or some other organization working on AI, and it sounds pretty plausible to me. I’m a little reluctant to go into details here due to information hazards.
What difficulties do you think would give most people a near-zero chance of getting anywhere? Is it the difficulty of acquiring the code for the AGI? Or of amassing enough hacked computers to compete with AGI researchers? Both seem pretty possible to me for a dedicated individual.
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI, or beating everyone else to superintelligence?
It’s important to remember that there may be quite a few people who would act somewhat maliciously if they took control of AGI, but I bet the vast majority of these people would never even consider trying to take control of the world. I think trying to control AGI would just be far too much work and risk for the vast majority of people who want to cause suffering.
However, there still may be a few people who want to harm the world enough to justify trying. They would need to be extremely motivated to cause damage. It’s a big world, though, so I wouldn’t be surprised if there were a few people like this.
I think that a typical, highly motivated malicious actor would have a much higher than 1% probability of succeeding. (If mainstream AI research starts taking security against malicious actors super seriously, the probability of a malicious actor’s success would be very low, but I’m not sure it will be taken seriously enough.)
A person might not know how to hack, build botnets, or eavesdrop, but they could learn. I think a motivated, reasonably capable individual would be able to become proficient in all of those things. And they could potentially have decades to train before they would need to use those skills.