If you’ll pardon updating off of fictional evidence: the malignant AI in “A Fire Upon the Deep” stays hidden until it has the capability to explode across space. It might be that a UFAI which had come into conflict with its creators would expect more conflict, and therefore quiet down.
Also, I think the failed-FAI concept seems somewhat reasonable: if the AI had some basic friendliness that sent it looking for morality, but in the meantime its moral instincts ran to turning people into paperclips rather than pulling babies from in front of trains, it might eventually catch on, feel terrible about everything, decide it couldn’t be confident in its metaethics, and conclude it would be better to commit suicide.
Of course, I haven’t got much expertise in the subject, so I may just have constructed a scenario that is more complicated, and therefore less likely, than I realize. I do still think that various forms of failed FAI (is this a term worth canonizing? An AI with an incomplete friendliness architecture is a very small subset of UFAI) would be relatively common in the design space of “minds that humans would design,” even if they are rare in the space of all possible minds.
The notion I was thinking of was a program tasked with increasing dominance for “real Americans.”
Unfortunately, the spec for “real Americans” wasn’t adequately thought out, so the human race is destroyed.
I don’t think such a program is likely to spread further than the human race did.
More fictional evidence, from John Brunner’s “The Jagged Orbit” (rot13’d below for spoilers).
N cebtenz vf frg gb znkvzvmvat cebsvgf (be cbffvoyl ergheaf) sbe n jrncbaf pbzcnal. Ntnvafg gur nqivpr bs grpuf, znantrzrag vafvfgf ba gheavat hc gur vapragvirf gbb uvtu (be znxvat gur gvzr senzr gbb fubeg—vg’f orra n juvyr fvapr V’ir ernq vg).
Gur pbzcnal nqiregvfrf cnenabvn—gur bayl jnl gb or fnsr vf gb unir zber naq zber cbjreshy crefbany jrncbaf. Vg oernxf pvivyvmngvba. Ab zber cebsvgf.
VVEP, gur pbzchgre cebtenz vairagf gvzr geniry gb jvcr vgfrys bhg naq erfbyir gur qvyrzan.
Real Americans have watched Star Trek. Of course they’ll tell the AI to go forth and subjugate all the funny-eared aliens across the galaxy :-)
That makes sense. My thoughts were basically that the space of AIs whose goals center on their creators, and which therefore peter out once their creators are destroyed, is probably bigger than I gave it credit for.
I’m sad that I don’t get to read the second half of your comment, because I haven’t read that book, and I intend to eventually read as much of the science fiction recommended here as possible.