Go worked because it’s a completely defined game with perfect information.
I am not claiming that, given the information, you can’t arbitrarily scale intelligence. But it will have diminishing returns: if you reframe the problem as fractional Go stones instead of “win” and “loss,” you will see asymptotically diminishing returns on intelligence.
Meaning that an infinite superintelligence, limited in action only to Go moves, still cannot beat an average human player if that player gets a sufficiently large number of extra stones.
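A minimal sketch of that saturation, with loudly made-up numbers (the perfect-play ceiling and the saturation rate below are illustrative assumptions, not measured Go data): measure a player’s edge over an average human in handicap-stone equivalents and cap it at perfect play, since no amount of intelligence can do better than a perfect game.

```python
# Toy model of "fractional Go stones" (illustrative numbers only, not real
# Go statistics): a player's edge over an average human, measured in handicap
# stones, saturates at the edge of perfect play.
import math

PERFECT_PLAY_EDGE = 13.0   # assumed: perfect play is ~13 stones above an average player (made-up)
SATURATION_SCALE = 2.0     # assumed: how quickly extra "intelligence" stops buying stones (made-up)

def stone_edge(intelligence: float) -> float:
    """Edge over an average human, in handicap-stone equivalents (saturating)."""
    return PERFECT_PLAY_EDGE * (1.0 - math.exp(-intelligence / SATURATION_SCALE))

def beats_handicap(intelligence: float, handicap_stones: float) -> bool:
    """The stronger player wins only if its stone-equivalent edge exceeds the handicap."""
    return stone_edge(intelligence) > handicap_stones

for level in [1, 2, 4, 8, 16, 1_000_000]:
    print(f"intelligence {level:>9}: edge = {stone_edge(level):5.2f} stones, "
          f"beats a 14-stone handicap? {beats_handicap(level, 14)}")
# Each doubling of intelligence buys fewer additional stones, and once the
# handicap exceeds PERFECT_PLAY_EDGE no level of intelligence wins at all.
```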
Pinball, by contrast, is an example from the real world, where information is imperfect, and even there, in an almost perfectly deterministic game, the ceiling on the utility of intelligence drops a lot.
For some tasks humans do, the ceiling may be so low that superintelligence isn’t even possible.
This has profound consequences. It means most AGI doom scenarios are wrong. The machine cannot make a bioweapon to kill everyone, or nanotechnology, or anything else without first having the information needed to do so and the actuators to act on it. (Note that in the pinball example, if you have a realistic robot at the pinball machine with a finite control packet rate, you are operating at precision far coarser than atomic.)
This makes control of AGIs possible, because you don’t need to understand the fire, just the fuel supply.
I would point out that this is not a new observation, nor is it unknown to the people most concerned about many of those same doom scenarios. There are several reasons why, but most of them boil down to: the AI doesn’t need to beat physics, it just needs to beat humans. If you took the best human pinball player, and made it so they never got tired/distracted/sick, they’d win just about every competition they played in. If you also gave them high-precision internal timekeeping to decide when to use the flippers, even more so. Let them play multiple games in different places at once, and suddenly humans lose every pinball competition there is.
Also, you’re equivocating between “the ceiling may be so low” for “some tasks,” and “most AGI doom scenarios are wrong.” Your analogy does not actually show where practical limits are for specific relevant real-world tasks, let alone set a bound for all relevant tasks.
For example, consider that humans can design an mRNA vaccine in 2 days. We may take a year or more to test, manufacture, and approve it, but we can use intelligence alone to design a mechanism to get our cells to manufacture arbitrary proteins. Do you really want to bet the planet that no level of intelligence can get from there to designing a self-replicating virus? And that we’re going to actually, successfully, every time, everywhere, forever, constrain AI so that it never persuades anyone to create something it wants based on instructions they don’t understand (whether or not they think they do)? The former seems like a small extension of what baseline humans can already do, and the latter seems like an unrealistic pipe dream. Not because your “extra Go stones to start with” point is necessarily wrong, but because humans who aren’t worried about doom scenarios are actively trying to give the AIs as many stones as they can.
Do you really want to bet the planet that no level of intelligence can get from there to designing a self-replicating virus?
No, but I wouldn’t bet that, based solely on computation and without wet-lab experiments (a lot of them), the AI could design a self-replicating virus that killed everyone. There are conflicting goals there: if the virus kills people too fast, it dies out. Any “timer” mechanism delaying when it attacks is not evolutionarily conserved. The “virus” will simplify itself in each host, dropping any genes that are not helping it right now. (Or rather, it iterates through the local possibility space, and the particles that spread well to the next host are the ones that escape; this is why Covid would evolve to be harder to stop with masks.)
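A toy way to see why a delayed “timer” payload is hard to conserve (a deliberately simple simulation with made-up parameters, not a virology model): give each virion a payload gene that costs replication speed now and only matters later, and the variants that drop it out-replicate the rest.

```python
import random

# Toy evolutionary sketch (made-up parameters, not a virology model): a payload
# gene that slows replication now and only "pays off" later gets dropped,
# because within each host the fastest replicators dominate what spreads on.

random.seed(0)

PAYLOAD_COST = 0.2    # assumed replication cost of carrying the delayed "timer" payload
LOSS_RATE = 0.01      # assumed chance per replication that a mutation deletes the payload
POP = 10_000          # virions sampled per transmission step
GENERATIONS = 40      # host-to-host transmission steps

payload_fraction = 1.0  # start with every virion carrying the payload

for gen in range(GENERATIONS + 1):
    if gen % 10 == 0:
        print(f"generation {gen:2d}: {payload_fraction:.1%} still carry the payload")
    offspring = []
    for _ in range(POP):
        has_payload = random.random() < payload_fraction
        if has_payload and random.random() < LOSS_RATE:
            has_payload = False                  # deletion mutation: payload lost
        fitness = 1.0 - (PAYLOAD_COST if has_payload else 0.0)
        if random.random() < fitness:            # faster replicators make it to the next host
            offspring.append(has_payload)
    payload_fraction = sum(offspring) / len(offspring)
# The payload is steadily selected away long before any "timer" would fire.
```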
That’s the delta. Sure, if the machine gets access to large numbers of wet labs, it could develop arbitrary things. This problem isn’t unsolvable, but the possible fixes depend on things humans don’t have data on. (Think sophisticated protein-based error-correction mechanisms that stop the virus from being able to mutate, or giving the virus DNA-editing tools that leave a time bomb in the hosts and also weaken the immune system, that kind of thing.)
That, I think, is the part the doomsters lack: many of them simply have no knowledge of things outside their narrow domain (math, rationality, CS) and don’t even know what they don’t know. It’s outside their context window. I don’t claim to know it all either.
And that we’re going to actually, successfully, every time, everywhere, forever, constrain AI so that it never persuades anyone to create something it wants based on instructions they don’t understand (whether or not they think they do)?
See, the other main point of disagreement I have is how you protect humans from this. I agree: bioweapons are incredibly asymmetric; it takes one release anywhere and it gets everywhere. And I don’t think a future where humans have restricted AGI research, say with “pauses” and onerous government restrictions on even running experiments, is one where they survive. If they do not even know what AGIs do or how they fail, and have not developed controllable systems that can fight on their side, they will be helpless when these kinds of attacks become possible.
What would a defense look like? Basically: hermetically sealed bunkers buried deep, away from strategic targets, enough for the entire population of the wealthier nations; remotely controlled surrogate robots; fleets of defense drones; nanotechnology sensors that can detect various forms of invisible attack; millions of drone submarines monitoring the oceans; defense satellites using lasers or microwave weapons; and so on.
What these things have in common is that the amount of labor required to build them at the needed scale is not available to humans. Each item I mention “costs” a lot; the total “bill” is thousands of times the entire GDP of a wealthy country. The “cost” comes from the human labor needed to mine the materials, design and engineer each device, build the subcomponents, and inspect the results, and many of the items are too fragile to mass-produce easily, so there is a lot of hand assembly, and so on.
Eliezer’s “demand” https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ amounts to death, I think. “Shut it down” means no defense is possible: the above cannot be built without self-replicating robotics, which require sufficiently general AI (substantially more capable than GPT-4) to cover the breadth of tasks needed to mine materials and manufacture more robotics and other things of comparable complexity to a robot. (This is why we don’t have self-replication right now; it’s an endless 10% problem. Without general intelligence, every robot has to be programmed essentially by hand, or for a restricted task space (see Amazon and a few others making robots that are general only for limited tasks like picking). And they will encounter “10% of the time” errors extremely often, requiring human labor to intervene. So most tasks today are still done with a lot of human labor, and robotics only applies to the most common tasks.)
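To put a rough number on that “endless 10% problem” (the per-step rates below are assumptions for illustration, not measured robotics data): per-step reliability compounds multiplicatively across a task chain, so long jobs almost never finish without a human stepping in.

```python
# Back-of-envelope sketch of the "endless 10% problem" (assumed failure rates,
# not measured robotics data): per-step autonomy compounds multiplicatively,
# so long task chains almost always need a human to intervene somewhere.

def chance_no_intervention(per_step_success: float, steps: int) -> float:
    """Probability a chain of independent steps completes with no human help."""
    return per_step_success ** steps

for steps in (10, 50, 200):
    for rate in (0.90, 0.99, 0.999):
        p = chance_no_intervention(rate, steps)
        print(f"{steps:3d} steps at {rate:.1%} per-step success -> "
              f"{p:.1%} chance of finishing unattended")
# At 90% per step, a 50-step job finishes unattended about 0.5% of the time;
# even 99% per step leaves a 200-step job failing more often than not.
```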