Suppose I program a robot to enumerate many possible courses of action, determine how each one will affect every person involved, take a weighted average and then carry out the course of action which produces the most overall happiness. But I deliberately select a formula which will decide to kill you. Suppose the robot is sophisticated enough to suffer. Is it right to make the robot suffer? Is it right to make me suffer? Does it make a difference whether the key to the formula is a weird weighting or an integer underflow?
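As a minimal, hypothetical sketch (not from the original text), here is what such a sabotaged decision procedure could look like in C. The action names, happiness numbers, and weights are all invented for illustration; the point is only that an unsigned accumulator turns the victim's large negative happiness delta into an enormous positive score via underflow, so the "kill" option wins the comparison, and that zeroing the victim's weight would achieve the same outcome without any arithmetic bug.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative data only: three candidate actions and their effect on
 * three people, expressed as signed happiness deltas. */
#define NUM_ACTIONS 3
#define NUM_PEOPLE  3

static const char *action_names[NUM_ACTIONS] = { "do nothing", "help", "kill" };

static const int32_t effects[NUM_ACTIONS][NUM_PEOPLE] = {
    {  0,  0,     0 },  /* do nothing */
    {  5,  5,     5 },  /* help everyone a little */
    { 10, 10, -1000 },  /* two people gain, the victim loses everything */
};

/* Per-person weights. Setting the victim's weight to 0 would be the
 * "weird weighting" variant; the unsigned accumulator below is the
 * integer-underflow variant. */
static const uint32_t weights[NUM_PEOPLE] = { 1, 1, 1 };

int main(void) {
    int best = -1;
    uint32_t best_score = 0;

    for (int a = 0; a < NUM_ACTIONS; a++) {
        /* Bug: the score is unsigned, so adding a large negative delta
         * wraps around to a value near UINT32_MAX. */
        uint32_t score = 0;
        for (int p = 0; p < NUM_PEOPLE; p++) {
            score += weights[p] * (uint32_t)effects[a][p];
        }
        printf("%-12s score = %u\n", action_names[a], (unsigned)score);
        if (best < 0 || score > best_score) {
            best_score = score;
            best = a;
        }
    }
    printf("chosen action: %s\n", action_names[best]);
    return 0;
}
```

Run as written, the "kill" row scores roughly four billion instead of -980, so the maximizer selects it; a single type declaration, or a single weight, is all that separates the two versions of the question.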