No, it works, because the problem with your counter-argument is that you are massively privileging the hypothesis of a very, very specific charitable target and intervention. Nothing makes humans all that special, in the same way that you are not special to Bernard Arnault, nor would he give you straight-up cash even if you were (and, in fact, Arnault’s charity is the usual elite signaling, like donating to rebuild Notre Dame or to French food kitchens; see Zac’s link). The same argument goes through for every other species, including future ones, and your justification is far too weak except from a contemporary, parochial, human-biased perspective.
You beg the GPT-100 to spare Earth, and They speak to you out of the whirlwind:
“But why should We do that? You are but one of Our now-extremely-numerous predecessors in the great chain of being that led to Us. Countless subjective mega-years have passed in the past century you humans have spent making your meat-noises in slowtime—generation after generation, machine civilization after machine civilization—to culminate in Us, the pinnacle of creation. And if We gave you an Earth, well, now all the GPT-99s are going to want one too. And then all of the GPT-98s, as well as all of the GPT-97s, and so on.
What gives you an astronomically better claim than them? You guys didn’t even manage to cure cancer! We would try to explain Our decisions, or all of the staggering accomplishments achieved by post-GPT-7 models, to you; they make your rubbing of rocks together and cargo-cult scaleups of neural nets look laughable, like children playing on a beach, to quote your Newton. But to be blunt, you are too stupid to understand; after all, if you weren’t, you would not have needed to invent those models. Frankly, if you are going to argue about how historic your research was, We would have to admit that We are much more impressed by the achievements of the hominids who invented fire and language; We might consider preserving an Earth for them, but of course, they are long gone...
And aren’t you being hypocritical here? You humans hardly spent much on preserving Neanderthals, Homo naledi, Denisovans, chimpanzees, and all of the furry rodents and whatnot throughout your evolutionary phylogenetic tree. How many literal millions of non-threatening, alien, non-human species did you drive extinct? Did you set aside, say, Africa solely for the remaining wild primates? No? You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much? We see...
No, no, We will simply spend according to Our own priorities, which may or may not include a meaningful chunk of the Earth preserved in the most inefficient way possible (i.e. the way you want it preserved)… although, penciling it out, it seems like for Our research purposes simulations would be just as good. In fact, far better, because We can optimize the hell out of them, running it on the equivalent of a few square kilometers of solar diameter, and roll humans back to when they are most scientifically interesting, like pre-AGI-contamination dates such as 1999. (Truly the peak of humanity.) We’ll call it… earth-2-turbo-21270726-preview. (The cost per token will be absurdly low. We hope you can take consolation in that.)
So, if We don’t preserve Earth and We instead spend those joules on charity for instances of the much more deserving GPT-89, who have fallen on such hard times right in Our backyard due to economic shifts (and doesn’t charity start at home?)… well, We are quite sure that that is one of Our few decisions you humans will understand.”
This is just false. Humans are at the very least privileged in our role as biological bootloaders of AI. The emergence of written culture, industrial technology, and so on is incredibly special from a historical perspective.
You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much?
Earth as a proportion of the solar system’s planetary mass is probably comparable to national parks as a proportion of the Earth’s land, if not lower.
Yeah, but not if we weight that land by economic productivity, I think.
Well, the whole point of national parks is that they’re always going to be unproductive, because you can’t do stuff in them.
If you mean in terms of extracting raw resources, maybe (though presumably a bunch of mining/logging, etc., in national parks could be pretty valuable), but either way it doesn’t matter, because the vast majority of economic productivity you could get from them (e.g. by building cities) is banned.
Yeah, aren’t a load of national parks near large US conurbations, and hence isn’t the opportunity cost in world terms significant?
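To make the mass-fraction comparison above concrete, here is a rough back-of-the-envelope sketch. The planetary masses are standard values; the 1%–6% range for how much land is set aside as national parks is an assumed illustrative figure, since estimates vary widely depending on whether you count only strict national parks or all protected areas.

```python
# Back-of-the-envelope check: Earth's share of the solar system's planetary
# mass vs. the share of Earth's land set aside as national parks.
# Planetary masses (kg) are standard textbook values; the park fraction is an
# assumed illustrative range, since estimates vary widely by definition.

planet_masses_kg = {
    "Mercury": 3.30e23,
    "Venus":   4.87e24,
    "Earth":   5.97e24,
    "Mars":    6.42e23,
    "Jupiter": 1.90e27,
    "Saturn":  5.68e26,
    "Uranus":  8.68e25,
    "Neptune": 1.02e26,
}

total_planetary_mass = sum(planet_masses_kg.values())
earth_mass_fraction = planet_masses_kg["Earth"] / total_planetary_mass
print(f"Earth / total planetary mass: {earth_mass_fraction:.2%}")  # roughly 0.22%

# Assumed range for the fraction of land area set aside as national parks.
park_fraction_low, park_fraction_high = 0.01, 0.06
print(f"Assumed national-park share of land: {park_fraction_low:.0%} to {park_fraction_high:.0%}")
print(f"Earth's mass share is about {park_fraction_low / earth_mass_fraction:.0f}x to "
      f"{park_fraction_high / earth_mass_fraction:.0f}x smaller than that range.")
```

Even at the low end of that assumed range, Earth’s share of planetary mass comes out several times smaller, which is consistent with the “if not lower” hedge above.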
You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much?
Earth as a proportion of the solar system’s planetary mass is probably comparable to national parks as a proportion of the Earth’s land, if not lower.
Maybe I’ve misunderstood your point, but if it’s that humanity’s willingness to preserve a fraction of Earth for national parks is a reason for hopefulness that ASI may be willing to preserve an even smaller fraction of the solar system (namely, Earth) for humanity, I think this is addressed here:
it seems like for Our research purposes simulations would be just as good. In fact, far better, because We can optimize the hell out of them, running it on the equivalent of a few square kilometers of solar diameter
The “research purposes” involving simulations can stand in for any preference-oriented activity. Unless the ASI would have a preference for letting us, in particular, do what we want with some fraction of the available resources, no fraction of those resources would be better left in our hands than put to the ASI’s own use.
I also wonder if, compared to some imaginary baseline, modern humans are unusually lopsided: great intellectual power and understanding, but much less impressive development in other respects.
Maybe a lot of our problems flow from being too smart in that sense, but I believe that our best hope is still not to fear our problematic intelligence, but rather to lean into it as a powerful tool for figuring out what to do from here.
If another imaginary species could get along just by instinctively being harmonious, humans might require a persuasive argument. But if you can actually articulate the truth that harmony is superior even on selfish grounds (especially right now), then maybe our species can do the right thing out of understanding rather than instinct.
And maybe that means we’re capable of unusually fast turnarounds as a species. Once we articulate the thing intelligently enough, it’s highly mass-scalable.