A piece of Singularity-related fiction with a theory for your evaluation: The Demiurge’s Elder Brother
I’m not sure if non-interference is really the best thing to precommit to—if we encounter a pre-AI civilization that still has various problems, death etc., maybe what {the AI they would have built} would have liked more is for us to help them (in a way that preserves their values).
If a superintelligence discovers a concept of value-preserving help (or something like CEV?) that is likely to be universal, shouldn’t it precommit to applying it to all encountered aliens?
See also: Not Taking Over the World.
Voted up. It is a mortal sin to invent responses to a valid objection I didn’t anticipate just to keep my original theory true, but for the purposes of fiction:
...perhaps the AI doesn’t know whether it will be Friendly or not. The only thing the AI knows for certain is that its own existence furthers its values. A “helpful” alien race that grants us super-technology might prevent us from accidentally creating an Unfriendly AI. Therefore, all Unfriendly AIs would regard pre-committing to helping their creator species as an acausal declaration of war.
The quotation marks (�) don’t render on my system.