How would you prefer someone to decide such dilemmas? Arguing that the various sufferings might not be convertible at all raises an additional problem rather than offering a solution—it is not an algorithm that tells a person or an AI how to decide.
I don't expect you think an AI should explode when faced with such a dilemma, nor that it should prefer to save neither the potential torture victim nor the potential dust-specked multitudes....