I don’t see how destroying all life follows logically from valuing all things. It is true that life destroys some things. However, it seems to me that the process of life—evolution and the production of novel genetic diversity—is a valuable thing in and of itself, definitely worth preserving. Not just for romantic notions of peace with nature, but for a very rational reason: the enormous amount of hard-won information present in genes that would be irreversibly lost if life were destroyed. By ‘irreversibly’ I mean it would take billions of years to evolve that information all over again.
So it makes much more sense to contain life (i.e. confine it to its planet of origin and prevent it from spreading, minimizing damage to other things) than to destroy it outright. Ultimately, a superintelligence will understand that everything is a tradeoff and that you can’t have your cake and eat it too.
You’re drawing a conclusion from a false assumption that so many sci-fi writers have reveled in: that human morals trump superintelligence, i.e. that a superintelligence would be stupid. In reality, a superintelligence will probably make a far better choice than you or I could, given the circumstances.
Substitute your own value drift, if that exact example doesn’t work for you.