See if this one resonates with you: https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

No it doesn’t. It is just more of the same nonsense. “AI could defeat all of humanity,” but it never explains how that happens. I think what is going on here is that very intelligent people are thinking about these things. Being intelligent, their blind spot is to grossly overestimate the importance of raw intelligence. So they see AI as being more intelligent than all of humanity and then immediately assume that means it will defeat and enslave humanity, as if intelligence were the only thing that mattered. It isn’t the only thing that matters. The physical world and brute force matter too. Smart people have a bad habit of forgetting that.
I don’t think reasoning about others’ beliefs and thoughts is helping you be correct about the world here. Can you instead try to engage with the arguments themselves and point out at which step you don’t see a concrete way for things to happen? You don’t show much sign of having read the article, so I’ll copy-paste the part that explains how AIs start acting in physical space.
In this scenario, the AIs face a challenge: if it becomes obvious to everyone that they are trying to defeat humanity, humans could attack or shut down a few concentrated areas where most of the servers are, and hence drastically reduce AIs’ numbers. So the AIs need a way of getting one or more “AI headquarters”: property they control where they can safely operate servers and factories, do research, make plans and construct robots/drones/other military equipment.
Their goal is ultimately to have enough AIs, robots, etc. to be able to defeat the rest of humanity combined. This might mean constructing overwhelming amounts of military equipment, or thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others’ equipment, or researching and deploying extremely powerful weapons (e.g., bioweapons), or a combination.
Here are some ways they could get to that point:
They could recruit human allies through many different methods—manipulation, deception, blackmail and other threats, genuine promises along the lines of “We’re probably going to end up in charge somehow, and we’ll treat you better when we do.”
Human allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an “AI headquarters” can be set up. Since the “AI headquarters” would officially be human property, it could be very hard for authorities to detect and respond to the danger.
Via threats, AIs might be able to get key humans to cooperate with them—such as political leaders, or the CEOs of companies running lots of AIs. This would open up further strategies.
As assumed above, particular companies are running huge numbers of AIs. The AIs being run by these companies might find security holes in the companies’ servers (this isn’t the topic of this piece, but my general impression is that security holes are widespread and that reasonably competent people can find many of them), and thereby might find opportunities to create durable “fakery” about what they’re up to.
E.g., they might set things up so that as far as humans can tell, it looks like all of the AI systems are hard at work creating profit-making opportunities for the company, when in fact they’re essentially using the server farm as their headquarters—and/or trying to establish a headquarters somewhere else (by recruiting human allies, sending money to outside bank accounts, using that money to acquire property and servers, etc.)
If AIs are in wide enough use, they might already be operating lots of drones and other military equipment, in which case it could be pretty straightforward to be able to defend some piece of territory—or to strike a deal with some government to enlist its help in doing so.
AIs could mix-and-match the above methods and others: for example, creating “fakery” long enough to recruit some key human allies, then attempting to threaten and control humans in key positions of power to the point where they control solid amounts of military resources, then using this to establish a “headquarters.”
So is there anything here you don’t think is possible? Getting human allies? Being in control of large sums of compute while staying undercover? Doing science and getting human contractors/allies to produce the results? And so on.
The way you use “intelligence” is different from what many people here mean when they use that word.
Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
The commenter you’re responding to mentioned physical and brute force, so I don’t think the understanding of intelligence is the crux.
The “can” and the “will” are separate arguments, but the case has been made for both.