Perhaps there is even a cosmic pattern such that an intelligent species is bound to discover AI at some point and thus bring about its own demise. Such a “great filter” would help explain the “Fermi paradox”: why there is no sign of life in the known universe despite the high probability of it emerging.
Most of the currently best-understood forms of dangerous AI are maximizers of something that requires energy or mass. These AIs would spread throughout the universe at relativistic speeds, converting all mass and energy into their desired form (whether paperclips, computronium, or whatever else).
This kind of AI would destroy the civilization that created it, since its creators were made of matter it could use for something else. However, it would also be very visible until it reached and disassembled us: a growing sphere of stars being disassembled or wrapped in Dyson spheres. An AI that wipes out its creating civilization and then destroys itself is something that could happen, but it seems unlikely that it would happen 99.9% of the time.
Seems like ‘chaos theory’ concepts could be a tipping point catalyst.
What do you mean by that?