The problem with rewiring someone against their will has to do with the second issue I mentioned, not the first one: changing their preferences and their utility function. If you’re creating something from scratch, I don’t see how that can be an issue without arbitrarily privileging some set of values as ‘correct’, since there are no pre-existing values for the new values to be in conflict with. (The first issue doesn’t seem to raise the same problems: I would consider it okay, or at least ‘questionable’ rather than ‘clearly bad’, to rewire someone to enjoy doing things they would be doing anyway to achieve their own goals, if you were sufficiently sure that you actually understood their goals; however, I don’t think that humans can be sufficiently sure of other humans’ goals for that.)
It’s not clear to me how you’re mapping this problem to the trolley problem. This is probably because I have some personal stuff going on and am not in very good cognitive shape, but regardless of the cause, if you want to talk about it in those terms I’d appreciate a clearer explanation.
To me the trolley problem is largely about how willing you are to look only at end states. In the trolley problem you have two scenarios, each with two options, and the possible end states are identical across them; only the actions that produce them differ. The same goes for the house elf problem, assuming that it is in the wizard’s power to create more human-like desires.
The main difference that I see between the cases in the trolley problems is “to what extent is the person you’re killing already in danger?” Being already on a track is pretty inherently dangerous. Being on a bridge in a mine isn’t as dangerous. Wandering into a hospital with healthy organs isn’t inherently dangerous at all.
Suppose the house elves were created just wanting to do chores. Would it be moral to leave them like that if you could make them more human? What if they had once been more human and you were now “reverting” them?
Ah-ha. Okay. I hadn’t thought of the trolley problem in those terms before. It’s not very relevant to how I’m thinking, though; I’m thinking in terms of what actions are acceptable from a given starting point, not what end states are acceptable.
As to house elves: I don’t consider humanlike values to be intrinsically better than other values in the relevant sense. I disagree with Clippy about the ideal state of the world, and am likely to come into conflict with em in relevant cases, but if the world were arranged in such a way that beings with Clippy-like values could exist without being in conflict with beings with other values, I would have no objection to said beings existing, and that’s basically the case with house elves. (And I don’t think it’s intrinsically wrong for Clippy to exist, just problematic enough that there are reasonable objections.)
I would consider causing house elves to have humanlike values just as problematic as causing humans to have house-elf-like values, regardless of whether the house elves were human to begin with, assuming that the house elves are satisfied with their values and do not actively want to have humanlike values. Two wrongs don’t make a right.