Reason: Cockroaches and the behavior of humans. We can and do kill individuals and specific groups of individuals. We can’t kill all of them, however. If humans can get into space, the lightspeed barrier might let far-flung tribes of “human fundamentalists,” to borrow a term from Charles Stross, survive, though individuals would often be killed and would never stand a chance in a direct conflict with a super AI.
In itself that doesn’t seem to be relevant evidence. “There exist species that humans cannot eradicate without major coordinated effort.” It doesn’t follow either that the same would hold for far more powerful AIs, or that we should model the AI-human relationship on humans-and-cockroaches rather than humans-and-kittens or humans-and-smallpox.
If humans can get into space, the lightspeed barrier might let far-flung tribes of “human fundamentalists,” to borrow a term from Charles Stross, survive
It’s easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don’t have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness.
I could as easily reply that an AI that wanted to kill fleeing humans could do so with sufficiently powerful directed lasers, which would overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically. (For one thing, there’s still no evidence that human space colonization, or even solar system colonization, will happen anytime soon. And unlike AI, it’s not going to happen suddenly, without lots of advance notice.)
It’s easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don’t have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness.
…
I could as easily reply that an AI that wanted to kill fleeing humans could do so with sufficiently powerful directed lasers, which would overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically.
A summary of your points: while conceivable, there’s no reason to think it’s at all likely. OK. How about, “Because it’s fun to think about?”
Actually, lasers might not be practical against maneuverable targets because of the diffraction limit and the lightspeed limit. In order to focus a laser at very great distances, one would need very large lenses. (Perhaps planet-sized, depending on distance and frequency.) Targets could respond by moving out of the beam, and the lightspeed limit would preclude immediate retargeting. Compensating for this by making the beam wider would be very expensive.
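The diffraction-limit point above can be checked with a quick back-of-the-envelope calculation. The sketch below uses the standard diffraction-limited spot size (roughly 2.44·λ·d/D); the wavelength, range, and aperture figures are illustrative assumptions, not claims from the thread:

```python
# Back-of-the-envelope diffraction-limit check.
# Illustrative numbers only: wavelength, range, and apertures are assumptions.

LIGHT_YEAR_M = 9.461e15


def spot_diameter(wavelength_m: float, distance_m: float, aperture_m: float) -> float:
    """Diffraction-limited beam spot diameter at range: ~2.44 * lambda * d / D."""
    return 2.44 * wavelength_m * distance_m / aperture_m


# A 500 nm (green) laser fired at a target one light-year away.
wavelength = 500e-9
distance = 1.0 * LIGHT_YEAR_M

# 1 km mirror, 1000 km mirror, and an Earth-diameter (~12,742 km) mirror.
for aperture_km in (1, 1_000, 12_742):
    spot_m = spot_diameter(wavelength, distance, aperture_km * 1e3)
    print(f"aperture {aperture_km:>6} km -> spot ~{spot_m / 1e3:.1f} km across")
```

Under these assumptions, even an Earth-diameter mirror only focuses the beam down to a spot roughly a kilometer wide at one light-year, which is consistent with the "planet-sized lenses" remark; and since the round-trip light lag at that range is two years, a target making even modest unpredictable lateral maneuvers would drift far outside any affordable beam width before the shooter could correct its aim.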
Regarding lasers: I could list things the attackers might do to succeed. But I don’t want to discuss it, because we’d be speculating on practically zero evidence. I’ll merely say that I would rather my hopes for the future not depend on a failure of imagination on the part of an enemy superintelligent AI.
You’re assuming that there’s always an answer for the more intelligent actor. Only happens that way in the movies. Sometimes you get the bear, and sometimes the bear gets you.
Sometimes one can pin one’s hopes on the laws of physics in the face of a more intelligent foe.
What reason is there to expect such a thing?
(Not to mention that, proverbs notwithstanding, humans can and do kill cockroaches easily; I wouldn’t want the tables to be reversed.)
It’s more fun to me to think about pleasant extremely improbable futures than unpleasant ones. To each their own.
There’s lots of scope for great adventure stories in dystopian futures.