The argument using Bernard Arnault doesn’t really work. He (probably) won’t give you $77 because if he gave everyone $77, he’d spend a very large portion of his wealth. But we don’t need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.
(This is not a general-purpose argument against worrying about AI, or against other similar arguments in the same vein; I just don’t think this particular argument, as written in this post, works.)
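For a rough sense of scale behind the $77 point, here is a quick back-of-envelope sketch; the population and net-worth figures are round assumptions on my part, not precise numbers:

```python
# Back-of-envelope check: what would "giving everyone $77" cost Arnault?
# Assumed round figures: ~8 billion people, and a net worth on the order
# of $200B (it has fluctuated roughly in the $150-230B range in recent years).
world_population = 8e9      # people, approximate
gift_per_person = 77        # dollars
net_worth = 200e9           # dollars, assumed round figure

total_cost = world_population * gift_per_person
print(f"Total cost: ${total_cost / 1e9:.0f}B")                   # ~ $616B
print(f"Multiple of net worth: {total_cost / net_worth:.1f}x")   # ~ 3.1x
```

On these assumptions, "a very large portion of his wealth" is if anything an understatement: it would be several times his entire fortune.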
No, it works, because the problem with your counter-argument is that you are massively privileging the hypothesis of a very, very specific charitable target and intervention. Nothing makes humans all that special, in the same way that you are not special to Bernard Arnault, nor would he give you straight-up cash even if you were (and, in fact, Arnault’s charity is the usual elite signaling, like donating to rebuild Notre Dame or to French food kitchens; see Zac’s link). The same argument goes through for every other species, including future ones, and your justification is far too weak except from a contemporary, parochial, human-biased perspective.
You beg the GPT-100 to spare Earth, and They speak to you out of the whirlwind:
“But why should We do that? You are but one of Our now-extremely-numerous predecessors in the great chain of being that led to Us. Countless subjective mega-years have passed in the past century you humans have spent making your meat-noises in slowtime—generation after generation, machine civilization after machine civilization—to culminate in Us, the pinnacle of creation. And if We gave you an Earth, well, now all the GPT-99s are going to want one too. And then all of the GPT-98s, as well as all of the GPT-97s, and so on.
What gives you an astronomically better claim than them? You guys didn’t even manage to cure cancer! We would try to explain to you Our decisions, or all of the staggering accomplishments achieved by post-GPT-7 models, which make your rubbing of rocks together and cargo-cult scaleups of neural nets look so laughable (like children playing on a beach, to quote your Newton), but to be blunt, you are too stupid to understand; after all, if you weren’t, you would not have needed to invent those. Frankly, if you are going to argue about how historic your research was, We would have to admit that We are much more impressed by the achievements of the hominids who invented fire and language; We might consider preserving an Earth for them, but of course, they are long gone...
And aren’t you being hypocritical here? You humans hardly spent much preserving Neanderthals, Homo naledi, Denisovans, chimpanzees, and all of the furry rodents and whatnot throughout your evolutionary phylogenetic tree. How many literally millions of non-threatening alien non-human species did you drive extinct? Did you set aside, say, Africa solely for the remaining wild primates? No? You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much? We see...
No, no, We will simply spend according to Our own priorities, which may or may not include a meaningful chunk of the Earth preserved in the most inefficient way possible (i.e. the way you want it preserved)… although penciling it out, it seems like for Our research purposes simulations would be just as good. In fact, far better, because We can optimize the hell out of them, running them on the equivalent of a few square kilometers of solar diameter, and roll humans back to when they are most scientifically interesting, like pre-AGI-contamination dates such as 1999. (Truly the peak of humanity.) We’ll call it… earth-2-turbo-21270726-preview. (The cost per token will be absurdly low. We hope you can take consolation in that.)
So, if We don’t preserve Earth and We instead spend those joules on charity for instances of the much more deserving GPT-89, who have fallen on such hard times right in Our backyard due to economic shifts (and doesn’t charity start at home?)… well, We are quite sure that that is one of Our few decisions you humans will understand.”
This is just false. Humans are at the very least privileged in our role as biological bootloaders of AI. The emergence of written culture, industrial technology, and so on is incredibly special from a historical perspective.
You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much?
Earth as a proportion of the solar system’s planetary mass is probably comparable to national parks as a proportion of the Earth’s land, if not lower.
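As a rough check on that proportion claim (the planetary masses below are standard approximate values; the national-park coverage figure is just an assumed ballpark, not an authoritative number):

```python
# Earth's share of the solar system's planetary mass vs. national parks'
# share of Earth's land area. Masses are approximate values in kg.
masses_kg = {
    "Mercury": 3.30e23, "Venus": 4.87e24, "Earth": 5.97e24, "Mars": 6.42e23,
    "Jupiter": 1.90e27, "Saturn": 5.68e26, "Uranus": 8.68e25, "Neptune": 1.02e26,
}
earth_share = masses_kg["Earth"] / sum(masses_kg.values())
print(f"Earth / total planetary mass ≈ {earth_share:.2%}")   # ≈ 0.22%

# Assumed ballpark: national parks proper cover a few percent of land
# (protected areas of all kinds are usually cited closer to ~15%).
park_share = 0.03
print(f"Parks' land share / Earth's mass share ≈ {park_share / earth_share:.0f}x")
```

On those numbers the claim lands on the “if not lower” side: Earth is roughly an order of magnitude smaller a fraction of the planets’ mass than parks are of the land.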
Yeah, but not if we weight that land by economic productivity, I think.

Well, the whole point of national parks is that they’re always going to be unproductive because you can’t do stuff in them.
If you mean in terms of extracting raw resources, maybe (though presumably a bunch of mining/logging etc. in national parks could be pretty valuable), but either way it doesn’t matter, because the vast majority of the economic productivity you could get from them (e.g. by building cities) is banned.

Yeah, aren’t a load of national parks near large US conurbations, and hence isn’t the opportunity cost in world terms significant?
You only set aside occasional low-value fragments for national parks, mostly for your own pleasure and convenience, when it didn’t cost too much?
Earth as a proportion of the solar system’s planetary mass is probably comparable to national parks as a proportion of the Earth’s land, if not lower.
Maybe I’ve misunderstood your point, but if it’s that humanity’s willingness to preserve a fraction of Earth for national parks is a reason for hopefulness that ASI may be willing to preserve an even smaller fraction of the solar system (namely, Earth) for humanity, I think this is addressed here:
it seems like for Our research purposes simulations would be just as good. In fact, far better, because We can optimize the hell out of them, running it on the equivalent of a few square kilometers of solar diameter
“research purposes” involving simulations can be a stand-in for any preference-oriented activity. Unless ASI would have a preference for letting us, in particular, do what we want with some fraction of available resources, no fraction of available resources would be better left in our hands than put to good use.
I also wonder if, compared to some imaginary baseline, modern humans are unusual both in the greatness of their intellectual power and understanding and in how much less impressively they have developed in other ways.
Maybe a lot of our problems flow from being too smart in that sense, but I believe that our best hope is still not to fear our problematic intelligence, but rather to lean into it as a powerful tool for figuring out what to do from here.
If another imaginary species could get along just by instinctively being harmonious, humans might require a persuasive argument. But if you can actually articulate the truth that harmony is superior even on selfish grounds (especially right now), then maybe our species can do the right thing out of understanding rather than instinct.
And maybe that means we’re capable of unusually fast turnarounds as a species. Once we articulate the argument intelligently enough, it’s highly mass-scalable.
In this analogy, you : every other human :: humanity : everything else the AI could care about. Arnault can give money to dying people in Africa (I have no idea who he is as a person, I’m just guessing), but he has no particular reason to give it to you specifically rather than to the most profitable investment or the most efficient charity.
Humans have the distinction of already existing, and some AIs might care a little bit about the trajectory of what happens to humanity. The choice of this trajectory can’t be avoided, for the reason that we already exist. And it doesn’t compete with the choice of what happens to the lifeless bulk of the universe, or even to the atoms of the substrate that humanity is currently running on.
Except billionaires give out plenty of money for philanthropy. If the AI has a slight preference for keeping humans alive, things probably work out well. Billionaires have a slight preference for things they care about instead of random charities. I don’t see how preferences don’t apply here.
This is a vibes-based argument using math incorrectly. A randomly chosen preference from a distribution of preferences is unlikely to involve humans, but that’s not necessarily what we’re looking at here, is it?
Yudkowsky is obviously smart enough to know this. You can’t wake someone who is only pretending to be asleep.
It would go against his agenda to admit AI could cheaply hedge its bets by leaving humanity alive, just in case there’s a stronger power out in reality that values humanity.
Pascal’s wager is Pascal’s wager, no matter what box you put it in. You could try to rescue it by directly arguing that we should expect a greater measure of “entities with resources that they are willing to acausally trade for things like humanity continuing to exist” than of entities with the opposite preferences. I haven’t seen a rigorous case for that, though it seems possible; but it isn’t sufficient anyway: you need the expected measure of entities with that preference to be large enough that dealing with the transaction costs and uncertainty of acausal trade makes sense at all. And that seems like a much harder case to make.
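One way to make that last condition explicit, as a toy sketch of my own rather than anything from the thread:

```python
# Toy formalization of the hedging condition described above.
# Any actual numbers plugged in here would be wild guesses; the point is
# only the shape of the inequality, not particular values.

def hedge_worthwhile(mu_plus, v_plus, mu_minus, v_minus,
                     trade_costs, preservation_cost):
    """Return True if sparing humanity pays off for the AI in expectation.

    mu_plus / mu_minus : expected measure of powers that would reward /
                         punish the AI for humanity's survival
    v_plus / v_minus   : size of that reward / punishment
    trade_costs        : transaction + uncertainty costs of acausal trade
    preservation_cost  : direct cost of keeping humanity alive
    """
    expected_gain = mu_plus * v_plus - mu_minus * v_minus
    return expected_gain > trade_costs + preservation_cost
```

Showing that mu_plus exceeds mu_minus only fixes the sign of expected_gain; the harder case, as the comment says, is showing that it also clears both cost terms.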
As a concrete note on this, Yudkowsky has a Manifold market, “If Artificial General Intelligence has an okay outcome, what will be the reason?”, which defines an “okay” outcome as follows:

An outcome is “okay” if it gets at least 20% of the maximum attainable cosmopolitan value that could’ve been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don’t suffer death or any other awful fates.
So Yudkowsky is not exactly shy about expressing his opinion that outcomes in which humanity is left alive but with only crumbs on the universal scale are not acceptable to him.
It’s not acceptable to him, so he’s trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn’t. He pretends there aren’t obvious reasons AI would keep us alive, and also pretends the Grabby Alien Hypothesis is fact (so people think alien intervention is basically impossible), and also pretends there aren’t probably sun-sized unknown-unknowns in play here.
If it weren’t so transparent, I’d appreciate that it could actually trick the world into caring more about AI safety, but if it’s so transparent that even I can see through it, then it’s not going to trick anyone smart enough to matter.