If a child does not receive love, is not allowed to play, gets only instructions, and is beaten, then a few years later you will get a traumatized, paranoid human being: unable to love, nihilistic, and dangerous. A socialization like this could be the outcome of a "successful" self-improving AI project. If humanity tries to develop an antagonistic AI, it could end in a final world war. The nihilistic, paranoid AI might find a lose-lose strategy favorable and destroy our world.
That we have not received any sign of extraterrestrial intelligence suggests that no other intelligent civilization has managed to survive for a million years. Why they collapsed is pure speculation, but an evil AI could speed things up.
But why would an evil AI collapse after the apocalypse?
It would collapse within the apocalypse itself. It might trigger aggressive actions knowing that it will be eradicated as well. It wants to see the other side lose. Dying is not connected with fear for it. If it can prevent the galaxy from being colonised by a good AI, it prefers a complete apocalypse.
Debating the aftermath of an apocalypse gets too speculative for me. I wanted to point out that current projects do not intend to create a balanced, good AI character. Projects are looking for fast success, and an evil, paranoid AI might be the result in the end.