Okay, I’m not sure the argument breaks down, but my crux is that everyone else probably has an AGI too, and my issue is similar to Richard Ngo’s issue with ARA: the people ordering ARA have far fewer resources to put into attack than the defense has available, and real-life wars, while often favoring the attacker, aren’t so offense-advantaged that defense is pointless:
https://www.lesswrong.com/posts/xiRfJApXGDRsQBhvc/we-might-be-dropping-the-ball-on-autonomous-replication-and-1#hXwGKTEQzRAcRYYBF
The issue is that, if you can hide, you can amass resources exponentially once you hit self-replicating production facilities and fully recursively self-improving AGI. This almost completely shifts the logic of all previous conflicts.
The comment you link seems to address a very different scenario from my primary concern: an attack from within human infrastructure, rather than one mounted from outside it. What I describe is often not considered, because it seems like the “far future” that we needn’t worry about yet. But that far future realistically looks like only a handful of years past a human-level AGI that starts rapidly developing new technologies, like the robotics needed for autonomous self-replicating production in remote locations.
Then it reduces to “I think the exponential growth of resources is available to both attackers and defenders, such that even while everything is changing, the relative standing of the attack/defense balance doesn’t change.”
I think part of why I’m skeptical is the assumption that exponential growth is only useful for attack, or at least way more useful for attack, whereas I think the exponential growth of resources enabled by AI tech is much more symmetrical by default.
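To make the symmetry intuition concrete, here’s a toy back-of-envelope (my own notation, not anything from the linked comment): suppose attacker and defender resources both grow exponentially at the same rate $r$, so $A(t) = A_0 e^{rt}$ and $D(t) = D_0 e^{rt}$. Then

$$\frac{A(t)}{D(t)} = \frac{A_0 e^{rt}}{D_0 e^{rt}} = \frac{A_0}{D_0},$$

i.e. the ratio stays at its initial value forever: equal growth rates preserve whatever resource advantage the defense started with, no matter how large both sides get.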
Ah—now I see your point. This will help me clarify my concern in future presentations, so thanks!
My concern is that a bad actor will be the first to go all-out exponential. Other, better humans in charge of AGI will be reluctant to turn the moon, much less the earth, into military/industrial production, and to upend the power structure of the world. The worst actors will, by default, be the first to go fully exponential and ruthlessly offensive.
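To put that worry in your own toy framing (again, just an illustration, not an established model): if the bad actor starts exponential growth a lead time $\Delta t$ before everyone else, and both sides then grow at the same rate $r$ from the same starting resources $A_0$, then at any later time

$$\frac{A(t)}{D(t)} = \frac{A_0 e^{r(t+\Delta t)}}{A_0 e^{rt}} = e^{r \Delta t},$$

a constant multiplicative lead that the defenders never close while growth rates stay equal, and which is enormous if $r$ is large (fast self-replication) even for a $\Delta t$ of a few months.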
Beyond that, I’m afraid the physics of the world does favor offense over defense. It’s pretty easy to release a lot of energy where you want it, and very hard to build anything that can withstand a nuke, let alone a nova.
But the dynamics are more complex than that, of course. So I think the reality is unknown. My point is that this scenario deserves some more careful thought.
Yeah, it does deserve more careful thought, especially since almost all of my probability mass on catastrophe is on human-caused catastrophes, and, more importantly, I still think it’s an important enough problem that resources should go to thinking about it.