Do you really want to bet the planet that no level of intelligence can get from there to designing a self-replicating virus
No, but I also wouldn’t bet that, working solely from computation and without a lot of wet lab experiments, the AI could design a self-replicating virus that killed everyone. There are conflicting goals there: if the virus kills people too fast, it dies out, and any “timer” mechanism delaying when it attacks is not evolutionarily conserved. The “virus” will simplify itself in each host, dropping any genes that are not helping it right now. (Or, put another way, it iterates over the local possibility space, and the particles that spread well to the next host are the ones that escape; this is why Covid would be expected to evolve to be harder to stop with masks.)
That’s the delta. Sure, if the machine gets access to large amounts of wet lab capacity it could develop arbitrary things. This problem isn’t unsolvable, but the possible fixes depend on things humans don’t have data on. (Think sophisticated protein-based error-correction mechanisms that stop the virus from being able to mutate, or giving the virus DNA-editing tools that leave a time bomb in the hosts while also weakening the immune system, that kind of thing.)
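To make the conservation point concrete, here is a toy serial-passage model. This is my own sketch with entirely made-up numbers for the replication penalty and gene-loss rate, not real virology; it only shows why a costly “timer” gene that delays the attack tends to get outcompeted and dropped over a chain of hosts.

```python
# Toy within-host selection model (illustrative only; all numbers are made up).
# A hypothetical engineered virus carries a "timer" gene that delays lethality
# but imposes a small replication penalty. Serial passage selects on within-host
# growth, so the costly timer variant is not conserved.

REPLICATION_PENALTY = 0.05   # assumed 5% slower replication for carrying the timer gene
LOSS_RATE = 1e-3             # assumed chance per replication that the gene is dropped
GENERATIONS_PER_HOST = 20    # rounds of replication before transmission
HOSTS = 30                   # length of the transmission chain
BOTTLENECK = 100.0           # virions founding the infection in the next host

def timer_frequency_after_chain(timer_fraction: float = 1.0) -> float:
    """Fraction of virions still carrying the timer gene after the chain."""
    for _ in range(HOSTS):
        timer = timer_fraction * BOTTLENECK
        plain = (1.0 - timer_fraction) * BOTTLENECK
        for _ in range(GENERATIONS_PER_HOST):
            timer_offspring = timer * 2.0 * (1.0 - REPLICATION_PENALTY)
            # a small fraction of timer offspring drop the gene and become "plain"
            plain = plain * 2.0 + timer_offspring * LOSS_RATE
            timer = timer_offspring * (1.0 - LOSS_RATE)
        timer_fraction = timer / (timer + plain)  # transmission bottleneck resamples
    return timer_fraction

print(f"timer gene frequency after {HOSTS} hosts: {timer_frequency_after_chain():.2e}")
```

Even starting at 100% prevalence, the timer variant is driven to a vanishing fraction within a few dozen hosts under these assumptions.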
That, I think, is the part the doomsters lack: many of them simply have no knowledge of things outside their narrow domain (math, rationality, CS) and don’t even know what they don’t know. It’s outside their context window. I don’t claim to know it all either.
And that we’re going to actually, successfully, every time, everywhere, forever, constrain AI so that it never persuades anyone to create something it wants based on instructions they don’t understand (whether or not they think they do)
See, the other main point of disagreement I have is how you protect humans from this. I agree. Bioweapons are incredibly asymmetric: it takes one release anywhere and it gets everywhere. And I don’t think futures where humans have restricted AGI research (say with “pauses” and onerous government restrictions on even running experiments) are ones where they survive. If they do not even know what AGIs do or how they fail, and have not developed controllable systems that can fight on their side, they will be helpless when these kinds of attacks become possible.
What would a defense look like? Basically: hermetically sealed bunkers buried deep, away from strategic targets, with enough capacity for the entire population of wealthier nations; remotely controlled surrogate robots; fleets of defense drones; nanotechnology sensors that can detect various forms of invisible attack; millions of drone submarines monitoring the oceans; defense satellites using lasers or microwave weapons; and so on.
What these things have in common is that the amount of labor required to build them at the needed scale is not available to humans. Each item “costs” a lot, and the total “bill” is thousands of times the entire GDP of a wealthy country. The “cost” comes from the human labor needed to mine the materials, design and engineer each device, build the subcomponents, and carry out inspections; many of the things mentioned are too fragile to mass produce easily, so there’s a lot of hand assembly, and so on.
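To show the shape of that estimate, here is a rough labor roll-up. Every unit count and per-unit person-hour figure below is a placeholder I picked for illustration, not a sourced number; the total scales directly with whatever hand-assembly labor you assume per unit.

```python
# Rough roll-up of the labor argument (all figures are illustrative placeholders).
# The point is the structure of the sum: units needed times person-hours per unit,
# compared against what a national workforce can actually supply.
ILLUSTRATIVE_PROGRAMS = {
    # name: (units needed, assumed person-hours per unit, covering mining,
    #        engineering, subcomponents, inspection, and hand assembly)
    "deep bunkers (population-scale)":   (3_000_000, 200_000),
    "surrogate robots (one per person)": (300_000_000, 2_000),
    "defense drones":                    (50_000_000, 1_000),
    "drone submarines":                  (2_000_000, 50_000),
    "sensors and defense satellites":    (1_000_000, 100_000),
}

WORKFORCE = 60_000_000          # assumed industrial workforce of a wealthy nation
HOURS_PER_WORKER_YEAR = 1_800   # rough full-time hours per worker per year

total_hours = sum(units * hours for units, hours in ILLUSTRATIVE_PROGRAMS.values())
years = total_hours / (WORKFORCE * HOURS_PER_WORKER_YEAR)
print(f"~{total_hours:.2e} person-hours, about {years:.0f} years of the entire "
      f"workforce doing nothing else")
```

Even with these deliberately modest placeholders the program eats over a decade of a whole national workforce’s output dedicated to nothing else; push the per-unit labor up to reflect hand assembly of fragile hardware and the bill climbs accordingly.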
Eliezer’s “demand” https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ is death, I think. “Shut it down” means no defense is possible: the above cannot be built without self-replicating robotics, which require sufficiently general AI (substantially more capable than GPT-4) to cover the breadth of tasks needed to mine materials and to manufacture more robotics and other things of comparable complexity. (This is why we don’t have self-replication right now; it’s an endless “10% problem.” Without general intelligence, every robot has to be programmed essentially by hand, or for a restricted task space (see Amazon and a few others making robotics that are general only for limited tasks like picking). And these robots encounter “10% of the time” errors extremely often, requiring human labor to intervene. So most tasks today are still done with a lot of human labor, and robotics only applies to the most common tasks.)
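As a quick illustration of why those “10% of the time” errors are fatal to autonomy, here is the compounding math with made-up per-step success rates; the rates and step counts are mine, chosen only to show how fast unattended completion collapses over long task chains.

```python
# Sketch of the "endless 10% problem": if a robot succeeds at each task step
# only some fraction of the time, the chance of finishing a long chain of
# steps without a human intervening collapses quickly. Rates are illustrative.
def unattended_completion_rate(per_step_success: float, steps: int) -> float:
    """Probability of completing `steps` consecutive steps with no human help."""
    return per_step_success ** steps

for success in (0.90, 0.99, 0.999):
    for steps in (10, 100, 1000):
        p = unattended_completion_rate(success, steps)
        print(f"per-step success {success:.1%}, {steps:4d} steps -> "
              f"P(no intervention) = {p:.3g}")
```

At 90% per-step reliability a 100-step job almost never finishes without a human stepping in; even 99.9% reliability only gets you roughly a one-in-three chance across 1,000 steps, which is why self-replication needs far more general competence than today’s robotics.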