First, there are scenarios where the human race is standing on the edge of extinction but somehow manages to fight back and survive; call that the Skynet scenario.
Skynet is not a realistic scenario.
After someone goes out and films Clippy: The Movie, will we also be prevented from using Clippy as shorthand for a specific hypothetical AI scenario?
If you don't mean Skynet as the Skynet in Terminator, what do you mean by Skynet?
Yeah, I was talking about that Terminator AI, in the sense that it gained ultimate control and used its power against humans but was defeated; the scenario doesn't have to include time-travelling cyborgs.
How do you measure, other than by gut feeling, how realistic these kinds of scenarios are? There is no way to assign probabilities accurately; all we can and should do is imagine as many consequences as possible.
Ultimate control and getting defeated don't mesh well. In Hollywood there's a chance that an AGI that gets ultimate control is afterwards defeated. In the real world, not so much.
"How do you measure, other than by gut feeling, how realistic these kinds of scenarios are?"
By analyzing them in multiple different ways, and by keeping up with the discourse.
"There is no way to assign probabilities accurately."
That's debatable. You can always use Bayesian reasoning, but it's not the main issue of this debate.
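As a minimal sketch of what I mean, with made-up numbers purely for illustration: say you start with a 1% prior for a takeover-then-defeat scenario and judge some piece of evidence to be twice as likely if that scenario were on track. The update is just Bayes' rule:

    # Toy Bayesian update with invented numbers; not a real estimate of anything.
    prior = 0.01            # assumed prior probability of the scenario
    likelihood_ratio = 2.0  # assumed: evidence is twice as likely if the scenario holds
    posterior = (prior * likelihood_ratio) / (prior * likelihood_ratio + (1 - prior) * 1.0)
    print(round(posterior, 3))  # ~0.02: the evidence roughly doubles a small prior

The point is only that you can update a rough number instead of arguing from gut feeling alone.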
Oh, thanks, now I see it: these almost-there cases look somewhat Hollywoodish, like how the villain is obliged to deliver a lingering monologue before actually killing his victim, and, thank God, the hero appears at the last moment.
Okay, a Skynet that won't instantly get rid of humanity is improbable, if it is really superintelligent and if it has that goal.
We can easily imagine many kinds of possible catastrophes that could happen, but we are not equally good at producing heavenly utopian visions; this, however, is evidence only of our lack of imagination.