I live in the physical world. For a computer program to kill me, it has to have power over the physical world and some physical mechanism to do that. So, anyone claiming that AI is going to destroy humanity needs to explain the physical mechanism by which that will happen. This article, like every other one I have seen making that argument, fails to do that.
See if this one resonates with you: https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/
No it doesn’t. It is just more of the same nonsense: “AI could defeat all of humanity,” but it never explains how that happens. I think what is going on here is that very intelligent people are thinking about these things. Being intelligent, their blind spot is to grossly overestimate the importance of raw intelligence. So they see AI as being more intelligent than all of humanity, and then immediately assume that means it will defeat and enslave humanity, as if intelligence were the only thing that mattered. It isn’t the only thing that matters. The physical world and brute force matter too. Smart people have a bad habit of forgetting that.
I don’t think reasoning about others’ beliefs and thoughts is helping you be correct about the world here. Can you instead try to engage with the arguments themselves and point out at what step you don’t see a concrete way for this to happen? You don’t show much sign of having read the article, so I’ll copy-paste the part that explains how AIs could start acting in the physical world.
In this scenario, the AIs face a challenge: if it becomes obvious to everyone that they are trying to defeat humanity, humans could attack or shut down a few concentrated areas where most of the servers are, and hence drastically reduce AIs’ numbers. So the AIs need a way of getting one or more “AI headquarters”: property they control where they can safely operate servers and factories, do research, make plans and construct robots/drones/other military equipment.
Their goal is ultimately to have enough AIs, robots, etc. to be able to defeat the rest of humanity combined. This might mean constructing overwhelming amounts of military equipment, or thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others’ equipment, or researching and deploying extremely powerful weapons (e.g., bioweapons), or a combination.
Here are some ways they could get to that point:
They could recruit human allies through many different methods—manipulation, deception, blackmail and other threats, genuine promises along the lines of “We’re probably going to end up in charge somehow, and we’ll treat you better when we do.”
Human allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an “AI headquarters” can be set up. Since the “AI headquarters” would officially be human property, it could be very hard for authorities to detect and respond to the danger.
Via threats, AIs might be able to get key humans to cooperate with them—such as political leaders, or the CEOs of companies running lots of AIs. This would open up further strategies.
As assumed above, particular companies are running huge numbers of AIs. The AIs being run by these companies might find security holes in the companies’ servers (this isn’t the topic of this piece, but my general impression is that security holes are widespread and that reasonably competent people can find many of them), and thereby might find opportunities to create durable “fakery” about what they’re up to.
E.g., they might set things up so that as far as humans can tell, it looks like all of the AI systems are hard at work creating profit-making opportunities for the company, when in fact they’re essentially using the server farm as their headquarters—and/or trying to establish a headquarters somewhere else (by recruiting human allies, sending money to outside bank accounts, using that money to acquire property and servers, etc.)
If AIs are in wide enough use, they might already be operating lots of drones and other military equipment, in which case it could be pretty straightforward to be able to defend some piece of territory—or to strike a deal with some government to enlist its help in doing so.
AIs could mix-and-match the above methods and others: for example, creating “fakery” long enough to recruit some key human allies, then attempting to threaten and control humans in key positions of power to the point where they control solid amounts of military resources, then using this to establish a “headquarters.”
So is there anything here you don’t think is possible? Getting human allies? Being in control of large sums of compute while staying undercover? Doing science, and getting human contractors/allies to produce the results? Etc.
The way you use “intelligence” is different from what many people here mean by that word. Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
The commenter you’re responding to mentioned physical and brute force, so I don’t think the understanding of intelligence is the crux.
The can and the will are separate arguments, but the case has been made for both.
One likely way AI kills humanity is indirectly, by simply outcompeting us: the AIs become more intelligent; their consciousness is recognized in at least some jurisdictions; those jurisdictions experience rapid, unprecedented technological and economic growth and become the new superpowers; less and less of world GDP goes to humans; we diminish.
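To put rough numbers on that outcompeting dynamic, here is a toy compound-growth sketch in Python. Every figure in it (the initial 90/10 GDP split, 3% human growth, 30% growth for AI-run jurisdictions) is an illustrative assumption, not a claim from this thread:

    # Toy model: human share of world GDP under a persistent growth-rate gap.
    # All numbers are illustrative assumptions, not forecasts.
    HUMAN_GDP0 = 90.0    # assumed initial output of human-run economies (arbitrary units)
    AI_GDP0 = 10.0       # assumed initial output of AI-run jurisdictions
    HUMAN_GROWTH = 0.03  # roughly historical growth for human economies
    AI_GROWTH = 0.30     # the "rapid unprecedented growth" premise, chosen for illustration

    for year in range(0, 31, 5):
        human = HUMAN_GDP0 * (1 + HUMAN_GROWTH) ** year
        ai = AI_GDP0 * (1 + AI_GROWTH) ** year
        print(f"year {year:2d}: human share of world GDP = {human / (human + ai):.1%}")

Under these made-up numbers the human share falls from 90% to under 1% within 30 years. No takeover event is needed, only a sustained growth differential.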
One of the simplest ways for AI to have power over the physical world is via humans as pawns. A reasonably savvy AI could persuade/manipulate/coerce/extort/blackmail real-life people to carry out the things it needs help with. Imagine a powerful mob boss who is superintelligent, never sleeps, and continuously monitors everyone in their network.
For a superintelligent AI, it will be trivial to orchestrate engineered superpandemics that kill 90+% of people; finishing off the disorganised rest will be easy.
Oh really? Will it have the ability to run an entire lab robotically to do that? If not, then it won’t be the AI doing anything. It will be the people doing it. Its power to do anything in the physical world only exists to the extent humans are willing to grant it.
You can order at least 10k-basepair DNA synthesis online; longer sequences are “call to get a quote” on the sites I found. The smallest synthetic genome for a viable self-replicating bacterium is 531 kb. The genome for a virus would be even smaller.
My understanding is that there are existing processes to encapsulate genes into virus shells from other species for gene therapy purposes. That leaves the logistics of buying both services, hooking them up and getting the particles injected into some lab animals.
It doesn’t look trivial, but less complicated than buying an entire nuclear arsenal.