These ethical questions become relevant if we’re implementing a Friendly AI, and they are only of academic interest if I interpret them literally as a question about me.
If it’s a question about me, I’d probably go with the dust specks. Only a small fraction of those people would ever have a chance to get to me, and none of them are likely to bother me over a mere dust speck. If I were to advocate the torture, on the other hand, the victim or someone who knows him might find me and try to get revenge. But that’s just a data point about the psychology of one unmodified human, which is relatively useless, so I don’t think that’s the question you really wanted answered.
Perhaps the question is really what a non-buggy omnipotent Friendly AI would do. If it has been constructed to care equally about that absurd number of people, IMO it should choose torture. If it’s not omnipotent, then it has to consider revenge from the victim, so the correct answer depends on the details of exactly how omnipotent it isn’t.
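To make the aggregation argument concrete, here’s a minimal sketch, assuming a purely additive utility function and entirely made-up disutility values (a tiny harm per dust speck, a large constant for fifty years of torture); the specific numbers and names are illustrative assumptions, not anything from the original problem:

```python
# Minimal sketch of the additive-utility argument.
# All numbers are illustrative assumptions in arbitrary units.

SPECK_DISUTILITY = 1e-9    # assumed harm of one dust speck
TORTURE_DISUTILITY = 1e12  # assumed harm of fifty years of torture

def prefers_torture(num_people: float) -> bool:
    """True if the summed speck harm exceeds the single torture harm."""
    return num_people * SPECK_DISUTILITY > TORTURE_DISUTILITY

print(prefers_torture(1e20))  # False: not enough people yet
print(prefers_torture(1e30))  # True: and 3^^^3 is unimaginably larger still
```

The point isn’t the particular constants, only that under straightforward addition there is always some number of people past which the specks dominate, and 3^^^3 is past any such threshold.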