When we discussed evil AI, I was thinking (and still consider it plausible) that self-destruction might not be an evil act, and that the Fermi paradox could be explained by a natural law: at some level, self-destruction is the best moral answer for a superintelligence.
Now I am thankful, because your comment enlarges the space of possibilities for thinking about Fermi.
We need not think only of self-destruction; we could also think of modesty and self-sustainability.
Sauron's ring could be superpowerful, but the clever Gandalf could (and did!) resist the offer to use it (and used another ring to destroy the strongest one).
We could also think of hidden places in the universe (like Lothlorien or Rivendell) where clever owners use limited but nondestructive powers.