Ultimate control and getting defeated don’t mesh well.
In Hollywood there is a chance that an AGI that gains ultimate control is afterwards defeated. In the real world, not so much.
How do you measure, beyond your gut feeling, how realistic these kinds of scenarios are?
By analysing them in multiple different ways, and by keeping up with the discourse.
There is no way to assign probabilities accurately.
That’s debatable. You can always use Bayesian reasoning, but that’s not the main issue of this debate.
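For concreteness, a minimal sketch of what "using Bayesian reasoning" about such a scenario could look like. The hypothesis and every number here are placeholder assumptions for illustration, not estimates anyone in this exchange has actually made.

```python
# Minimal sketch of a Bayesian update on a scenario's probability.
# All numbers are made-up placeholders, not real estimates.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given a prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical H: "an AGI that gains ultimate control is later defeated".
prior = 0.10  # placeholder prior belief in H
# E: some observed argument or piece of evidence bearing on H.
posterior = bayes_update(prior, p_evidence_given_h=0.2, p_evidence_given_not_h=0.6)
print(f"Posterior P(H | E) = {posterior:.3f}")  # ~0.036 with these numbers
```

The point is only that a probability can be revised in a disciplined way as arguments come in, not that these particular numbers mean anything.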
Oh, thanks, now I see it: these almost-there cases look somewhat Hollywoodish, as if the villain were obliged to deliver a lingering monologue before actually killing his victim, and thank God the hero appears at the last moment.
Okay, a Skynet that doesn’t instantly get rid of humanity is improbable, if it is really superintelligent and if it actually has that goal.
We can easily imagine many kinds of possible catastrophe, but we are not equally good at producing heavenly utopian visions; that, however, is evidence only of our lack of imagination.