FWIW, it seems unlikely that many superintelligent agents would “destroy humanity”—even without particularly safety-conscious programmers. Humanity will have immense historical significance—and will form part of the clues the superintelligence has about the form of other alien races it might encounter. Humanity’s preservation can therefore be expected to be a common instrumental good.
Counter: superintelligent agents won’t need actually-existing humans to have good models of other alien races.
Counter to the counter: humans use up only a tiny fraction of the resources available in the solar system and its surroundings, and who knows, maybe the superintelligence sees a tiny possibility of some sort of limit to the quality of any model relative to the real thing.
One possible counter to the counter to the counter: but when the superintelligence in question is first emerging, killing humanity may buy it a not-quite-as-tiny increment of probability of not being stopped in time.
Re: good models without humans—I figure superintelligent agents are likely to be far more interested in their origins than we are. Before we meet them, aliens will be such an important unknown.
Re: killing humanity—I see the humans vs machines scenarios as grossly unrealistic. Humans and machines are a symbiosis.
So, it’s less like Terminator and more like The Matrix, right?
“Less like Terminator”—right. “More like The Matrix”—that at least featured some symbiotic elements. There was still a fair bit of human-machine conflict in it, though.
I tend to agree with Matt Ridley when it comes to the Shifting Moral Zeitgeist. Things seem to be getting better.