Most people wouldn’t want to be turned into paperclips?
Of course not, since they haven’t yet heard the argument that would make them want to. All the moral arguments we’ve heard so far have been invented by humans, and we just aren’t that inventive. Even so, we have the Voluntary Human Extinction Movement.
Wei, suppose I want to help someone. How ought I to do so?
Is the idea here that humans can end up anywhere depending on what arguments they hear in what order, without the overall map of all possible argument orders displaying any concentration in one or more clusters where lots of endpoints would light up, or any coherency that could be extracted from it?
I don’t know. (I mean I don’t know how to do it in general. There are some specific situations where I do know how to help, but lots more where I don’t.)
Yes. Or another possibility is that the overall map of all possible argument orders does display some sort of concentration, but that concentration is morally irrelevant. Human minds were never “designed” to hear all possible moral arguments, so where the concentration occurs is accidental, and perhaps horrifying from our current perspective. (Suppose the concentration turns out to be voluntary extinction or something worse: would you bite the bullet and let the FAI run with it?)
A variety of people profess to consider this desirable if it leads to powerful intelligent life filling the universe with higher probability or greater speed. I would bet that there are stable equilibria that can be reached with arguments.
Carl says that a variety of people profess to consider it desirable that present-day humans get disassembled “if it leads to powerful intelligent life filling the universe with higher probability or greater speed.”
Well, yeah, I’m not surprised. Any system of valuing things in which every life, present and future, has the same utility as every other life will lead to that conclusion, because turning the existing living beings and their habitat into computronium, von Neumann probes, etc., to hasten the start of the colonization of the light cone by even a few seconds will have positive expected marginal utility under that system of valuing things.
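To make the arithmetic behind that claim concrete, here is a back-of-envelope sketch. Both numbers are illustrative assumptions, not established figures: the present population is taken as roughly 8 billion, and the potential future lives gained per second of earlier colonization is an arbitrary astronomically large placeholder.

```python
# Back-of-envelope version of the total-utilitarian argument above.
# All numbers are illustrative assumptions, not established figures.

EXISTING_LIVES = 8e9             # assumed present-day human population
FUTURE_LIVES_PER_SECOND = 1e20   # hypothetical potential lives gained per
                                 # second of earlier light-cone colonization

def marginal_utility_of_disassembly(seconds_saved: float) -> float:
    """Net utility change, counting every life (present or future) equally,
    if disassembling existing beings starts colonization earlier."""
    gained = FUTURE_LIVES_PER_SECOND * seconds_saved
    lost = EXISTING_LIVES
    return gained - lost

# Under these assumptions, even a few seconds of speedup swamps the
# loss of every currently existing life:
print(marginal_utility_of_disassembly(3) > 0)
```

The point is not the particular numbers but the structure: once future lives are counted at par and their number is astronomical, any positive speedup term dominates the loss of all present lives.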
That could still be a great thing for us provided that current human minds were uploaded into the resulting computronium explosion.
...which won’t happen if the computronium is the most important thing and uploading existing minds would slow it down. The AI might upload some humans to get their cooperation during the early stages of takeoff, but it wouldn’t necessarily keep those uploads running once it no longer depended on humans, if the same resources could be used more efficiently for itself.
To get my cooperation, at least, it would have to credibly precommit that it wouldn’t just turn my simulation off after it no longer needs me. (Of course, the meaning of the word “credibly” shifts somewhat when we’re talking about a superintelligence trying to “prove” something to a human.)