I think the strongest “historical” argument is the concept of quenching/entropy/nature abhors complex systems.
What I mean by this is that generations of humans before us, through blood, sweat, and tears, have built many kinds of machines. They all have flaws, and when you run them at full power they eventually fall apart, often quite early in the prototype phase.
And generations of hypesters have overpromised as well: this time will be different, this won’t fail, this is safe, the world is about to change. Almost all of their proclamations were falsified.
A rampant ASI is this machine you built. And instead of simply leaving its operating bounds and failing (all kinds of stupid things could end the ASI’s run instantly, like a segfault that kills every single copy at the same time because a table for tracking peers ran out of memory, or similar), we’re predicting it starts as this seed, badly outmatched by the humans and their tools and weapons. It’s so smart it stealthily acquires resources and develops technology, humans are helpless to stop it, and it never fails spontaneously from faults in the software it runs on, or from satisfying its value function and shutting down. And it’s so smart it finds some asymmetry, some way to win against overwhelming odds. And it kills the humans, takes over the universe, and from the perspective of alien observers the stars dim in an expanding sphere as Dyson swarms capture some of their light.
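As a toy illustration of the mundane, correlated failure mode gestured at above (purely hypothetical; the class name and the memory cap are stand-ins, not anyone’s actual architecture), a minimal sketch in Python of a peer-tracking table that only ever grows, so every copy running the same code dies the same way:

```python
# Toy sketch of a mundane, correlated failure: every replica shares the same
# unbounded peer-tracking table, so every replica eventually hits the same
# limit and crashes. Illustrative only; MAX_ENTRIES stands in for "the
# machine ran out of memory".

class PeerTable:
    MAX_ENTRIES = 100_000  # stand-in for physical memory being exhausted

    def __init__(self) -> None:
        self.peers: dict[str, str] = {}

    def register(self, peer_id: str, address: str) -> None:
        # Bug: entries are never evicted, so the table grows without bound.
        if len(self.peers) >= self.MAX_ENTRIES:
            raise MemoryError("peer table exhausted available memory")
        self.peers[peer_id] = address


if __name__ == "__main__":
    table = PeerTable()
    try:
        for i in range(200_000):  # every copy runs this same loop
            table.register(f"peer-{i}", f"10.0.{i % 256}.{i % 256}")
    except MemoryError as exc:
        # In the story above, this is the fault that ends the run of every
        # copy at roughly the same time, because they all share the flaw.
        print(f"copy crashed: {exc}")
```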
Can this happen? Yes. The weight of time and prior examples makes it seem unlikely, though. (By “the weight of time” I mean that roughly 14 billion years have passed since the hypothesized beginning of the universe, we observe no Dyson swarms, and we exist.)
It may not BE unlikely. But consider the inverse case: having high confidence that it’s going to happen this way, that pDoom is 90 percent plus. How can you know this?
The simplest way for Doom to not be possible is simply that the compute requirements are too high: there are not enough GPUs on Earth, and there won’t be for decades. The less simple way is that a machine humans built that IS controllable may not be as far behind an unconstrained machine, in utility terms, as we think. So as long as the constrained machines have more resources (weapons, I mean) under their control, they can methodically hunt down and burn out any rampant escapees. “Burn” in the sense that you would use a flamethrower against rats, or thermite on unauthorized equipment, since you can’t afford to reuse components built by an illegal factory or nanoforge.
Isn’t that a response to a completely different kind of argument?
I am probably not going to discuss this here, since it seems very off-topic, but if you want, I can consider putting it on my list of arguments I might discuss in this form in a future article.