Then it reduces to "I think the exponential growth of resources is available to both attackers and defenders, such that even while everything is changing, the relative standing of the attack/defense balance doesn't change."
I think part of why I'm skeptical is the assumption that exponential growth is only useful for attack, or at least far more useful for attack, whereas I think the exponential growth of resources via AI tech is much more symmetrical by default.
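To make the symmetry claim concrete, here is a toy sketch (the growth rate r and starting resource levels A_0 and D_0 are illustrative assumptions, not numbers from this discussion): if attacker and defender resources both grow exponentially at the same rate, then

$$\frac{A(t)}{D(t)} = \frac{A_0 e^{rt}}{D_0 e^{rt}} = \frac{A_0}{D_0},$$

so the relative standing stays fixed even while absolute capabilities explode.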
Ah—now I see your point. This will help me clarify my concern in future presentations, so thanks!
My concern is that a bad actor will be the first to go all-out exponential. Other, better humans in charge of AGI will be reluctant to turn the moon, much less the earth, into military/industrial production, and to upend the power structure of the world. The worst actors will, by default, be the first to go full exponential and ruthlessly offensive.
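To sketch why the first mover matters, continuing the same toy model (with the head start Δt as an assumed parameter, purely for illustration): if the bad actor starts the identical exponential process Δt earlier, then

$$\frac{A(t)}{D(t)} = \frac{A_0 e^{r(t+\Delta t)}}{D_0 e^{rt}} = \frac{A_0}{D_0}\, e^{r\Delta t},$$

so even at equal growth rates the gap is amplified by a constant factor of $e^{r\Delta t}$. For example, if resources double weekly (r = ln 2 per week), a ten-week head start corresponds to roughly a $2^{10} \approx 1000\times$ relative advantage.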
Beyond that, I'm afraid the physics of the world favors offense over defense. It's pretty easy to release a lot of energy where you want it, and very hard to build anything that can withstand a nuke, let alone a nova.
But the dynamics are more complex than that, of course. So I think the reality is unknown. My point is that this scenario deserves some more careful thought.
Yeah, it does deserve more careful thought, especially since I expect almost all of my probability mass on catastrophe to be human-caused; more importantly, I still think it's an important enough problem that resources should go toward thinking about it.