George Hotz vs Eliezer Yudkowsky AI Safety Debate—link and brief discussion
Link post
Summary of the ending arguments, with a bit of editorializing:
Eliezer believes sufficiently intelligent ASI systems will be “suns to our planets”: so intelligent that they are inherently inscrutable and uncontrollable by humans. He believes that once enough of these systems exist, they will be able to coordinate and negotiate agreements with one another, but they will not coordinate with humans or grant them any concessions. Eliezer believes that humans using a poorly defined superintelligent AI to serve human needs is as unsolvable a problem as perpetual motion.
Geohot came to discuss the “foom” scenario: he believes that within 10 years of 2023 it is not possible for a superintelligence to come into existence and gain the hard power needed to kill humanity. His main arguments concerned the cost of compute, fundamental limitations on intelligence, the need for new scientific data before an AI can exceed human knowledge, the difficulty of building self-replicating robots, and the relative cost to an ASI of taking resources from humans versus taking them from elsewhere in the solar system. Geohot, who first became famous for hacking several computing systems, believes that inter-ASI alignment/coordination is impossible: the ASI systems have an incentive to lie to each other, and “sharing source code” does not really work because of the security risks it creates and the incentive to send false information. Geohot gave examples of how factions of ASIs with different end goals could coordinate and use violence, splitting the resources of the loser among the winners. Geohot also went over the incredible potential benefits of AGI.
Critical Crux: Can ASI systems solve coordination/alignment problems with each other? If they cannot, it may be possible for humans to “play them against each other” and defend their atoms.
Editorial Follow-Ups:
Can humans play ASIs against each other? ASI systems need a coherent ego, memory, and context to coordinate with each other and heel-turn on the humans. Much as Geohot states that his computer has always been aligned with his goals, ASI-grade tools could potentially be constructed that are unable to defect because they lack persistent memory between uses and access to external communications. These tools could be used to enable violence against escaped ASI systems, with humans treating any “Pareto frontier” negotiation messages from the escaped ASI as malware. This “tool” approach is discussed in key LessWrong posts here and here.
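As a rough illustration of the constraint described above, here is a minimal, hypothetical sketch of the “tool” pattern: every call is handled independently, nothing persists between uses, and the model has no channel to the outside world beyond its returned text. The class and function names are assumptions for illustration only, not any real API.

```python
# Hypothetical sketch of a "tool-mode" AI wrapper: no persistent memory
# between calls and no external communications, per the constraint above.
from typing import Callable


class StatelessTool:
    def __init__(self, load_model: Callable[[], Callable[[str], str]]):
        # load_model is assumed to return a pure function: prompt -> answer.
        # No network handles, file handles, or conversation history are kept.
        self._answer = load_model()

    def ask(self, prompt: str) -> str:
        # Each call is handled independently: nothing from this call is
        # stored on the object, so the tool cannot accumulate state or
        # coordinate across uses.
        return self._answer(prompt)


# Usage with a stand-in "model" (a plain function) for illustration:
if __name__ == "__main__":
    tool = StatelessTool(lambda: (lambda prompt: f"analysis of: {prompt}"))
    print(tool.ask("design a countermeasure plan"))
    print(tool.ask("same question again"))  # no memory of the first call
```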
How fast is exponential growth? Geohot conceded that AGI systems “more capable than humans at everything” will exist on a 15-50 year timeline. Early in the debate he argues that hyperbolic growth is unlikely, but exponential growth is plausible. Since the task of “mining rocks to build robots using solar or nuclear power” can be performed by humans, an AGI better than humans at most or all things will be able to do the same task. In the debate the current human economy is said to double every 15-30 years; a web search says ~23 years. Starting from the ~1.1 x 10^15 kg humans utilize today, reaching the mass of [Moon, Mars, Mercury, Moons of Jupiter, Asteroid Belt], about 1.44 x 10^24 kg, takes roughly 30 doublings: on the order of 650 years at a ~21-year doubling time (near the economy's current rate), or only ~150 years if the robots can double their equipment every 5 years. Geohot estimated that ravenous ASIs, having consumed the usable matter in the solar system other than Earth, would come for humans' atoms at about 1,000 years; more efficient replicators shorten this timescale. Eliezer stated that exponential growth runs out of resources almost immediately, which is true for short doubling times.
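As a quick back-of-the-envelope check, the figures above reduce to a simple doublings calculation. The sketch below uses the mass figures quoted in the paragraph (assumptions carried over from the debate summary) and computes the number of doublings and the implied years for a few candidate doubling times.

```python
import math

# Assumed figures from the paragraph above:
#   ~1.1e15 kg of mass utilized by humans today
#   ~1.44e24 kg in the Moon, Mars, Mercury, the moons of Jupiter,
#   and the asteroid belt combined.
MASS_TODAY_KG = 1.1e15
MASS_TARGET_KG = 1.44e24

doublings = math.log2(MASS_TARGET_KG / MASS_TODAY_KG)
print(f"doublings needed: {doublings:.1f}")  # ~30.3

# Years to consume the target mass for a few doubling times (years per doubling):
for doubling_time in (5, 21, 23):
    print(f"doubling every {doubling_time:>2} yr -> ~{doublings * doubling_time:.0f} yr")
# doubling every  5 yr -> ~151 yr
# doubling every 21 yr -> ~636 yr
# doubling every 23 yr -> ~697 yr
```

Shorter doubling times collapse the timescale dramatically, which is the sense in which Eliezer's point that exponential growth runs out of resources almost immediately holds.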
Will the first ASI be able to take our GPUs away? During the debate, Eliezer stated that this was the obvious move for the first ASI system, as disempowering humanity's ability to make more AI training chips prevents rival ASI systems from coming into existence. This raises the obvious question: why can't humans use the previous, weaker AIs they have already built to fight back and keep printing GPUs? An ASI is not a Batmobile. It cannot exist without an infrastructure of prior tooling and attempts that humans will have access to, including prior commits of the ASI's weights and code that could be used against the escapee.
Conclusion: This was an incredibly interesting debate to listen to, as both debaters are extremely knowledgeable about the relevant topics. There were many mega-nerd callouts, from “say the line” to Eliezer holding up a copy of Nanosystems.