Do you think your AI research has implications for this situation? It seems to me that going from our idiot god toward self-engineering intelligence is a step up by an order of magnitude, so that such a “metaengineer” could, in fact, choose to optimize for species survival, or for some other virtue of its own choosing.
I think the notion of “species” for a superintelligence doesn’t really follow, because I don’t see the idea of “individual” surviving unambiguously in such a scenario. But I think my question still makes sense: if evolution kills its creations by selecting for short-term individual fitness at the expense of the species, do you think the next step of life, having been intelligently designed, will change the nature of that problem entirely?