Would the pre-human apes as a class, if somehow given enough intelligence to understand the question, have endorsed their great^n-grandchildren developing human intelligence? Yes, I think so. I’m pretty sure that if you extrapolated their values to a more capable intelligence, the result would generally include valuing greater capability in their own descendants, in many respects.
Would they endorse what we have done with it? I’m not even sure the question is meaningful, since the answer depends much more strongly on which extrapolated values and reasoning processes you sneak into the “somehow given enough intelligence”. There are plenty of other aspects of human biological and cultural development that they might not have endorsed.
Would animals outside the human line of ancestry have endorsed any of it at all? I’d guess very likely not, considering what’s happening to most of their descendants. Looked at from the point of view of “good for the Earth”, I’m not even sure what the question means. The Earth itself is a huge ball of inanimate rock. From the point of view of most species on it, the development of human intelligence has been a catastrophic mass extinction. Have we even affected the Universe beyond Earth in any salient way? What sort of thing would qualify as good (or bad) for the universe anyway?
One thing does seem fairly likely: if AI does lead to human extinction, it would very likely also lead to the extinction of every other form of life on Earth, and probably a long way beyond. This makes the question much less human-centric, and seems to address many of your questions.
Possibly the only being(s) that would think it a good thing would be the AI itself or themselves, and it’s not even known whether they will have values at all. Even if they do, we can have no idea what those values will be. It’s possible to imagine an entity that experiences its own existence with loathing, or that suffers in other ways, but can’t or won’t end itself regardless, for other reasons.