A variety of people profess to consider this desirable if it leads to powerful intelligent life filling the universe with higher probability or greater speed. I would bet that there are stable equilibria that can be reached with arguments.
Carl says that a variety of people profess to consider it desirable that present-day humans get disassembled “if it leads to powerful intelligent life filling the universe with higher probability or greater speed.”
Well, yeah, I’m not surprised. Any system of valuing things in which every life, present and future, has the same utility as every other life will lead to that conclusion: turning the existing living beings and their habitat into computronium, von Neumann probes, etc., to hasten the start of the colonization of the light cone by even a few seconds has positive expected marginal utility under such a system, because the astronomical number of future lives gained by an earlier start dwarfs the comparatively tiny number of present lives destroyed.
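To put rough numbers on that marginal-utility claim, here is a back-of-the-envelope sketch in Python. Both quantities are illustrative assumptions, not sourced estimates; the future-lives figure is just a stand-in for “astronomically large.”

```python
# Back-of-the-envelope arithmetic under "every life, present or future,
# counts equally". Both numbers below are illustrative assumptions.

current_lives = 8e9             # roughly the present human population
future_lives_per_second = 1e28  # ASSUMED potential future lives gained per
                                # second that light-cone colonization starts earlier

seconds_saved = 1.0             # start colonization one second sooner

# Net change in total lives if everyone alive today is disassembled
# to buy that one-second head start:
net = future_lives_per_second * seconds_saved - current_lives
print(f"net expected lives: {net:.3e}")  # astronomically positive
```

The conclusion is insensitive to the exact figures: as long as the assumed per-second gain in future lives exceeds the present population, the expected marginal utility of disassembly comes out positive.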
That could still be a great thing for us provided that current human minds were uploaded into the resulting computronium explosion.
...which won’t happen if maximizing computronium is what matters most and uploading existing minds would slow it down. The AI might upload some humans to secure their cooperation during the early stages of takeoff, but it wouldn’t necessarily keep those uploads running once it no longer depended on humans, if the same resources could be put to more efficient use for its own ends.
To get my cooperation, at least, it would have to credibly precommit that it wouldn’t just turn my simulation off once it no longer needed me. (Of course, the meaning of the word “credibly” shifts somewhat when we’re talking about a superintelligence trying to “prove” something to a human.)