It’s not clear to me how you’re mapping this problem to the trolley problem.
To me, the trolley problem is largely about how willing you are to look only at end states. In the trolley problem you have two scenarios, each with two options, that leave you with identical end states. The same goes for the house-elf problem, assuming it is within the wizard's power to create more human-like desires.
The main difference I see between the trolley-problem cases is "to what extent is the person you're killing already in danger?" Being on the track already is pretty inherently dangerous. Being on a bridge in a mine isn't as dangerous. Wandering into a hospital with healthy organs isn't inherently dangerous at all.
Suppose the house elves were created just wanting to do chores. Would it be moral to leave them like that if you could make them more human? What if they had once been more human and you were now “reverting” them?
Ah-ha. Okay. I hadn’t thought of the trolley problem in those terms before. It’s not very relevant to how I’m thinking, though; I’m thinking in terms of what actions are acceptable from a given starting point, not what end states are acceptable.
As to house elves: I don't consider humanlike values to be intrinsically better than other values in the relevant sense. I disagree with Clippy about the ideal state of the world, and am likely to come into conflict with em in relevant cases, but if the world were arranged in such a way that beings with Clippy-like values could exist without being in conflict with beings with other values, I would have no objection to such beings existing, and that's basically the case with house elves. (And I don't think it's intrinsically wrong for Clippy to exist, just problematic enough that there are reasonable objections.)
I would consider causing house elves to have humanlike values just as problematic as causing humans to have house-elf-like values, regardless of whether the house elves were human to begin with, assuming that the house elves are satisfied with their values and do not actively want humanlike values. Two wrongs don't make a right.