Most people care a lot more about whether they and their loved ones (and their society/humanity) will in fact be killed than whether they will control the cosmic endowment. Eliezer has been going on podcasts saying that with near-certainty we will not see really superintelligent AGI because we will all be killed, and many people interpret your statements as saying that. And Paul’s arguments do cut to the core of a lot of the appeals to humans keeping around other animals.
If it is false that we will almost certainly be killed (which I think is right; I agree with Paul’s comment approximately in full), and one believes that, then saying we will almost certainly be killed would be deceptive rhetoric that could scare people who care less about the cosmic endowment into worrying more about AI risk. Since you’re saying you care much more about the cosmic endowment, and since in practice this talk is shaped to have the effect of persuading people to do the thing you would prefer, it’s quite important whether you believe the claim for good epistemic reasons. That matters for disclaiming the hypothesis that the claim is being misleadingly presented, or drifted into because of its rhetorical convenience, without vetting it (where you would vet it if it were rhetorically inconvenient).
I think being right on this is important for the same sorts of reasons climate activists should not falsely say that failing to meet the latest emissions target on time will soon thereafter kill 100% of humans.
This thread continues to seem to me to be off-topic. My main takeaway so far is that the post was not clear enough about how it’s answering the question “why does an AI that is indifferent to you, kill you?”. In an attempt to make this clearer, I have added the following to the beginning of the post:
This post is an answer to the question of why an AI that was truly indifferent to humanity (and sentient life more generally), would destroy all Earth-originated sentient life.
I acknowledge (for the third time, with some exasperation) that this point alone is not enough to carry the argument that we’ll likely all die from AI, and that a key further piece of argument is that AI is not likely to care about us at all. I have tried to make it clear (in the post, and in comments above) that this post is not arguing that point, while giving pointers that curious people can use to get a sense of why I believe this. I have no interest in continuing that discussion here.
I don’t buy your argument that my communication is misleading. Hopefully that disagreement is mostly cleared up by the above.
In case not, to clarify further: My reason for not thinking in great depth about this issue is that I am mostly focused on making the future of the physical universe wonderful. Given the limited attention I have spent on these questions, though, it looks to me like there aren’t plausible continuations of humanity that don’t route through something that I count pretty squarely as “death” (like, “the bodies of you and all your loved ones are consumed in an omnicidal fire, thereby sending you to whatever afterlives are in store” sort of “death”).
I acknowledge that I think various exotic afterlives are at least plausible (anthropic immortality, rescue simulations, alien restorations, …), and haven’t felt a need to caveat this.
Insofar as you’re arguing that I shouldn’t say “and then humanity will die” when I mean something more like “and then humanity will be confined to the solar system, and shackled forever to a low tech level”, I agree, and I assign that outcome low probability (and consider that disagreement to be off-topic here).
(Separately, I dispute the claim that most humans care mainly about themselves and their loved ones having pleasant lives from here on out. I’d agree that many profess such preferences when asked, but my guess is that they’d realize on reflection that they were mistaken.)
Insofar as you’re arguing that it’s misleading for me to say “and then humanity will die” without caveating “(insofar as anyone can die, in this wide multiverse)”, I counter that the possibility of exotic scenarios like anthropic immortality shouldn’t rob me of the ability to warn of lethal dangers (and that this usage of “you’ll die” has a long and storied precedent, given that most humans profess belief in afterlives, and still warn their friends against lethal dangers without such caveats).
Thank you for the clarification. In that case my objections are on the object-level.
This post is an answer to the question of why an AI that was truly indifferent to humanity (and sentient life more generally), would destroy all Earth-originated sentient life.
This does exclude random small terminal valuations of things involving humans, but it leaves out the instrumental value of humans for trade and science, and uncertainty about how other powerful beings might respond. I know you did an earlier post with your claims about trade for some human survival, but as Paul says above it’s a huge point for such small shares of resources. Given that kind of claim, much of Paul’s comment still seems very on-topic (e.g. his bullet point).
Insofar as you’re arguing that I shouldn’t say “and then humanity will die” when I mean something more like “and then humanity will be confined to the solar system, and shackled forever to a low tech level”, I agree, and
Yes, close to this (although more like ‘gets a small resource share’ than necessarily confinement to the solar system or low tech level, both of which can also be avoided at low cost). I think it’s not off-topic given all the claims made in the post and the questions it purports to respond to. E.g. sections of the post purport to respond to someone arguing from how cheap it would be to leave us alive (implicitly allowing very weak instrumental reasons to come into play, such as trade), or making general appeals to ‘there could be a reason.’
Separate small point:
And disassembling us for spare parts sounds much easier than building pervasive monitoring that can successfully detect and shut down human attempts to build a competing superintelligence, even as the humans attempt to subvert those monitoring mechanisms. Why leave clever antagonists at your rear?
The cost to sustain multiple superintelligent AI police per human (which can double in supporting roles for a human habitat/retirement home and in controlling the local technical infrastructure) is not large relative to the metabolic costs of the humans, let alone a trillionth of the resources. It just means replicating the same impregnable AI+robotic capabilities that are ubiquitous elsewhere in the AI society.
RE: decision theory w.r.t. how “other powerful beings” might respond: I really do think Nate has already argued this, and his arguments continue to seem more compelling to me than the opposition’s. Relevant quotes include:
It’s possible that the paperclipper that kills us will decide to scan human brains and save the scans, just in case it runs into an advanced alien civilization later that wants to trade some paperclips for the scans. And there may well be friendly aliens out there who would agree to this trade, and then give us a little pocket of their universe-shard to live in, as we might do if we build an FAI and encounter an AI that wiped out its creator-species. But that’s not us trading with the AI; that’s us destroying all of the value in our universe-shard and getting ourselves killed in the process, and then banking on the competence and compassion of aliens.
[...]
Remember that it still needs to get more of what it wants, somehow, on its own superintelligent expectations. Someone still needs to pay it. There aren’t enough simulators above us that care enough about us-in-particular to pay in paperclips. There are so many things to care about! Why us, rather than giant gold obelisks? The tiny amount of caring-ness coming down from the simulators is spread over far too many goals; it’s not clear to me that “a star system for your creators” outbids the competition, even if star systems are up for auction.
Maybe some friendly aliens somewhere out there in the Tegmark IV multiverse have so much matter and such diminishing marginal returns on it that they’re willing to build great paperclip-piles (and gold-obelisk totems and etc. etc.) for a few spared evolved-species. But if you’re going to rely on the tiny charity of aliens to construct hopeful-feeling scenarios, why not rely on the charity of aliens who anthropically simulate us to recover our mind-states… or just aliens on the borders of space in our universe, maybe purchasing some stored human mind-states from the UFAI (with resources that can be directed towards paperclips specifically, rather than a broad basket of goals)?
Might aliens purchase our saved mind-states and give us some resources to live on? Maybe. But this wouldn’t be because the paperclippers run some fancy decision theory, or because even paperclippers have the spirit of cooperation in their heart. It would be because there are friendly aliens in the stars, who have compassion for us even in our recklessness, and who are willing to pay in paperclips.
(To the above, I personally would add that this whole genre of argument reeks, to me, essentially of giving up, and tossing our remaining hopes onto a Hail Mary largely insensitive to our actual actions in the present. Relying on helpful aliens is what you do once you’re entirely out of hope about solving the problem on the object level, and doesn’t strike me as a very dignified way to go down!)