if there’s a bunch of superintelligences running around and they don’t care about you—no, they will not spare just a little sunlight to keep Earth alive.
No more than Bernard Arnault, having $170 billion, will surely give you $77.
Yes, I agree that this conditional statement is obvious. But while we’re on the general topic of whether Earth will be kept alive, it would be nice to see some engagement with Paul Christiano’s arguments (which Carl Shulman “agree[s] with [...] approximately in full”) that superintelligences might care about what happens to you a little bit, articulated in a comment thread on Soares’s “But Why Would the AI Kill Us?” and another thread on “Cosmopolitan Values Don’t Come Free”.
The reason I think this is important is that “[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates”: if you write 3000 words inveighing against people who think comparative advantage means that horses can’t get sent to glue factories, that doesn’t license the conclusion that superintelligence Will Definitely Kill You, if there are other reasons why superintelligence Might Not Kill You that don’t stop being real just because very few people have the expertise to formulate them carefully.
(An important caveat: the possibility of superintelligences having human-regarding preferences may or may not be comforting. As a fictional illustration of some relevant considerations, the Superhappies in “Three Worlds Collide” cared about the humans to some extent, but not in the specific way that the humans wanted to be cared for.)
Now, you are on the record stating that you “sometimes mention the possibility of being stored and sold to aliens a billion years later, which seems to [you] to validly incorporate most all the hopes and fears and uncertainties that should properly be involved, without getting into any weirdness that [you] don’t expect Earthlings to think about validly.” If that’s all you have to say on the matter, fine. (Given the premise of AIs spending some fraction of their resources on human-regarding preferences, I agree that uploads look a lot more efficient than literally saving the physical Earth!)
But you should take into account that if you’re strategically dumbing down your public communication in order to avoid topics that you don’t trust Earthlings to think about validly—and especially if you have a general policy of systematically ignoring counterarguments that it would be politically inconvenient for you to address—you should expect that Earthlings who are trying to achieve the map that reflects the territory will correspondingly attach much less weight to your words, because we have to take into account how hard you’re trying to epistemically screw us over by filtering the evidence.
Bernard Arnault has given eight-figure amounts to charity. Someone who reasoned, “Arnault is so rich, surely he’ll spare a little for the less fortunate” would in fact end up making a correct prediction about Bernard Arnault’s behavior!
Obviously, it would not be valid to conclude “… and therefore superintelligences will, too”, because superintelligences and Bernard Arnault are very different things. But you chose the illustrative example! As a matter of local validity, it doesn’t seem like a big ask for illustrative examples to in fact illustrate what they purport to.
Nate Soares engaged extensively with this, in reasonable-seeming ways that I’d expect Eliezer Yudkowsky to mostly agree with. Mostly it seems like a disagreement where Paul Christiano doesn’t really have a model of what realistically causes good outcomes, and so is very uncertain, whereas Soares does have a definite model, and so is less uncertain.
But you can’t really argue with someone whose main opinion is “I don’t know”, since “I don’t know” gives you nothing to engage with. He’d have to at least present some new observable forces, or reject some of the forces already presented, rather than postulating that maybe there’s an unobserved kindness force that arbitrarily explains all the kindness we see.
I think the simplest rebuttal to “caring a little” is that there is a difference between “caring a little” and “caring enough”. Say the AI is willing to pay $1 for your survival. In an economy that is rapidly disassembling Earth into a Dyson swarm, oxygen, a protected environment, and food are not just stuff lying around; they are complex, expensive artifacts. An AI that values your survival at $1 is certainly not going to buy you an O’Neill cylinder to be evacuated into, and it is not going to pay the opportunity cost of leaving Earth undisassembled, so you die.
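To make the arithmetic explicit, here is a minimal sketch of that decision; the $1 valuation comes from the argument above, and every other figure is invented purely for illustration:

```python
# Toy model: an agent preserves you only if its willingness to pay covers the
# cheapest way of keeping you alive, including forgone resources.
# All figures besides the $1 valuation are made up for illustration.

willingness_to_pay = 1.00    # the AI values your survival at $1

# Two hypothetical ways to keep you alive, priced in dollars:
oneill_cylinder_cost = 1e9   # evacuating you to an O'Neill cylinder
spare_earth_cost = 1e15      # opportunity cost of not disassembling Earth

cheapest_survival_cost = min(oneill_cylinder_cost, spare_earth_cost)

if willingness_to_pay >= cheapest_survival_cost:
    print("The AI pays and you live.")
else:
    print("Caring a little is not caring enough: you die.")  # this branch runs
```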
The other case is the difference between “caring in general” and “caring ceteris paribus”. It’s possible for an AI to prefer, all else equal, a world with n+1 happy humans to a world with n happy humans. But if what the AI really wants is to implement some particular neuromorphic computation found in the human brain, then, given the ability to operate freely, it would tile the world with chips imitating that part of the brain.
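A toy utility function shows how a genuine ceteris-paribus preference can fail to ever bind once resources trade off; the weights here are invented for illustration:

```python
# Hypothetical utility: chips dominate, happy humans only break ties.
def utility(chips_kg: float, happy_humans: float) -> float:
    return 1e6 * chips_kg + 1.0 * happy_humans  # does "prefer" more happy humans

MATTER_KG = 1000.0  # total matter available (arbitrary units)

tile_everything = utility(MATTER_KG, 0)      # all matter becomes chips
spare_a_human = utility(MATTER_KG - 1.0, 1)  # reserve 1 kg for one human

print(tile_everything > spare_a_human)  # True: the tie-breaker never fires
```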
It’s also not enough for there to be a force that makes the AI care a little about human thriving; that force must also not make the AI care a lot about some extremely distorted version of you, or we get into territory like tiny molecular smiles, locking you in a pleasuredome, and so on.
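As a cartoon of that failure mode, consider an optimizer handed a smile-counting proxy instead of the thing we meant; the options and numbers below are hypothetical:

```python
# Toy Goodhart scenario: strong optimization of a proxy ("smiles per dollar")
# selects a distorted target. All entries are invented.
options = {
    "flourishing humans":     {"smiles": 1e2,  "cost": 1e6},
    "tiny molecular smileys": {"smiles": 1e20, "cost": 1e3},
}

best = max(options, key=lambda name: options[name]["smiles"] / options[name]["cost"])
print(best)  # -> "tiny molecular smileys"
```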
If you’re not supposed to end up as a pet of the AI, then it seems like the AI needs to respect your property rights, but that is easier said than done given massive differences in ability. Consider: would we even be able to have a society in which we respected the property rights of dogs? It seems like it would be difficult. How could we confirm a transaction without the dogs being defrauded of everything?
Probably an intermediate solution would be to accept that humans will be defrauded of everything very rapidly, but then give us a universal basic income or something, so that our failures aren’t permanent setbacks. But it’s unclear how to respect freedom in funding while preventing people from funding terrorists, and without encouraging people to get lost in junk. That’s really where the issue of values becomes hard.