What is Evil about creating House Elves?
Edit: This is old material. It may be out of date.
I’m talking about the fictional race of House Elves from the Harry Potter universe, first written about by J. K. Rowling and then uplifted in a grand act of fan-fiction by Eliezer Yudkowsky. Unless severely mistreated, they enjoy servitude to their masters (or more accurately, the current residents of the homes they are bound to). This is also enforced by magical means, since they must follow the letter, if not the spirit, of their master’s direct orders.
Overall, treating House Elves the way they would like to be treated appears more or less sensible, and I don’t feel like debating this unless people disagree. Changing agents without their consent or knowledge seems obviously wrong, so turning someone into a servant creature seems intuitively wrong. I can also understand that many people would mind their descendants being modified in such a fashion; perhaps their disutility is enough to offset the utility of their modified descendants. However, how true is this of distant descendants that bear only a passing resemblance? I think a helpful reminder of scale might be our own self-domestication.
Assuming one created elf-like creatures ex nihilo, not as slightly modified versions of an existing species, why would one not want to bring into existence a mind that would value its own existence and benefit you, as long as the act of creation or its existence in itself does not represent a large enough disutility? This seems somewhat related to the argument Robin Hanson once made that any creature that can pay for its own existence and would value its own existence should be created.
I didn’t mention this in the many HP fan fiction threads because I want a more general debate on the treatment and creation of such a class of agents.
Edit: Clearly if the species or class contains exceptions there should be ways for them to pursue their differing values.
The evil in creating house-elves is not that they like doing chores—it is that they suffer and cannot do anything about it.
They are capable of feeling pain (and psychological anguish) but are incapable of defending themselves, avenging wrongs done to them, or even demanding better treatment. They are entirely dependent on specific humans (their house’s family) for any happiness or satisfaction they might receive. They do not have the choice of leaving.
As such, they are perfect victims for abusers. Like human abuse victims, they can be provoked to self-blame and self-harm; but they do not (as far as we are told) have even the option of suicide. Their only possible hopes are to be inherited by a kind master or to entirely unintentionally dissatisfy their masters so much that they are freed—and even this latter appears to cause significant psychological damage in some cases.
Creating house-elves (as they are presented in the novels) is not like creating servile robots. It is not like creating a being that enjoys the thought of being killed and eaten. It is like imbuing a punching-bag with the ability not only to feel pain, but to contemplate the horror of its lot in life.
Huh. This line of thinking hadn’t even occurred to me. Very insightful, and it actually changed my mind. I wish I could vote it up more than once.
Presumably, the mythical sentient pain-feeling creature would merely assign high utility to serving its master. Given enough pain/disutility, it would move on to a different master. As long as this were allowed, your horrifying, sentient, pain-feeling punching bag would not arise.
Edit: But I see what you mean: canon house-elves are sentient punching bags.
I think the major issue House Elves create has to do not with (1) first order reasoning over ethical behavior with other people, nor (2) second order character development aimed at other people (binding pre-commitments to do momentarily irrational things to create certain game theoretic incentive systems with pleasing global properties) but something like (3) “third order moral reasoning” over political processes that include people pre-committed to various irrational character regimes being subject to political speech exhorting people to make similar pre-commitments based on shared traits.
Suppose humans meet “radically different aliens”. First contact stories are a staple of science fiction and they can play out in various ways. Some of the pleasant outcomes involve humans and aliens changing their mind about some stuff so as to recognize each other as “people” and get along.
Now imagine that humans create house elves to be capable of speech and eye contact and geometric verbal reasoning and laughter and so on. Only then does this two-species composite meet “radically different aliens”.
From the alien’s perspective, humans and house elves are nearly identical except for a small fudge, right? Since the humans were OK creating the house elves they must endorse that state as “theoretically acceptable”. Therefore it wouldn’t seem like that large of an imposition to ask the humans to modify themselves that way, right? Maybe the house elves are actually happier? And they’re certainly cheaper to feed!
Suppose the aliens earnestly and naively explained that they would be horrified to have created “house aliens” but they don’t want to judge us, and would like to participate in our culture to some degree. Since “alien shaped slaves” make them queasy, and house elves are too small to do useful jobs on their space ships, they want some humans to explain how to modify full human brains to make them good servants. And could we maybe show them how this technology works and give them some prototype volunteer slave humans? Pretty please? What could possibly go wrong? And they pinky-to-tentacle promise not to abuse the technology… not that humans can yet read the way their mandibles and multi-faceted eyes squirm around to distinguish between delight at the opportunity to make new friends and gleeful appreciation of having found a new already-half-tamed slave species to upgrade a little and then sell on the galactic market...
And then the people who have been complaining about house elves all this time freak out about the creation of more “intrinsically oppressed” sentient beings. And the people who insisted that house elves weren’t problematic at all don’t want to give those dumb liberals “I told you so” credits so they agree with the aliens that maybe some weird humans can be found somewhere to volunteer so that humans can get some pretty glass beads from the aliens.
And six different philosophers/priests/politicians come up with subtly different takes on the issue and argue amongst each other to make a name for themselves, creating bickering factions of supporters… which makes the jobs of the people currently negotiating with the aliens that much harder, because it obviously gives the aliens a BATNA to try waiting for a regime change and seeing if they can get cooperation from one of the currently-out-of-power factions who are squabbling in favor of full cooperation so that humans can get some of those pretty beads the aliens wear!
Maybe there will never be such aliens? Maybe the general sanity waterline is high enough that no one should worry about this stuff? But if you’re unsure of the answers to those issues, do you really want to fudge the brightline definition of “human” that we got for free from evolution? If you’re going to fudge it, do you really want to fudge it down instead of up? Maybe keeping the universe of “moral atoms” simple enough for 12 year old kids to understand is helpful to making sure everyone acts morally in the long run?
I think that when bioconservatives talk about disgust and purity in the context of transhumanism, these are some of the pragmatic political issues that are lurking behind their moral sentiments. I don’t generally “side” with the bioconservatives, but this is a reasonably good zombie argument that I’ve been able to reconstruct from their position.
This is really good… now… what if the universe of ‘moral atoms’ is NOT simple enough for 12-year-old kids to understand, but acknowledging that would cripple our efforts to get people to act morally? What if we already know this, but would need to figure out a whole new way of talking about the human condition in order to adopt the findings of psychology into our day-to-day lives?
This is among the best political comments on LW.
I just thought about something. Could it be that we implicitly assign negative value to the existence of human-like minds whose utilities are just slightly off in an obvious way? Is part of the aversion to wireheading an uncanny-valley effect?
Thinking of a creature that evolved or was selected to enjoy being used as a beast of labour by caretakers seems more OK the more different its mind is from ours (let’s say for the sake of argument it does have human or superhuman-level intelligence).
Why is my sympathy tied to this? Is this a case of my neural circuitry being incapable of emulating what it would be like to be such a creature? Or is the failure in the first example, since I try to use my mind to emulate a mind that, while otherwise similar, has something vital changed in a way I can’t understand?
I’d expect that the Less Wrong community would tend to be unusually independent and averse to hierarchies. But much of humanity is accustomed to obedience, and even regards obedience to authority as a positive good. Some societies are more authoritarian than others, but duty, humility, and respect for superiors are commonly praised as virtues. Disobedience, whining, and malingering are considered bad.
Lots and lots of parents devote a lot of effort to raising their children to be obedient and respectful, not for cynical reasons, but out of love. They deliberately train their children to be servile, not only within the family, but in the context of school and adult life. I take it that some degree of this is traditional throughout the world.
Why does this matter? Suppose I create house elves based on a template of an existing, sapient, non-servant species. However, I do not use individuals as templates; the new house elf servants I create are as unique as any newborn unmodified elf. I can even create them as newborns and raise them appropriately.
What is wrong about this? How does creating new elves who enjoy being servants switch from Good to Evil due to existence of other, causally unrelated ‘wild’ elves, assuming I don’t harm them?
Note: given what Eliezer has shown of the wizarding world, I don’t doubt that whoever created house elves (if anyone did) modified existing grown elves to make the first generation. It seems intuitively simpler somehow when doing things by magic. And of course it fits the whole “accidentally sneezing sapience on a carrot without even THINKING of the moral consequences” theme :-)
I know I’m not the best at doing the background research, so I’m not saying this to criticize, but:
Hasn’t this issue been discussed in the philosophical literature long before there were House Elves in Harry Potter, or even HHG2G? I’m pretty sure it has, and it goes by a standard name, but I don’t know what that name is.
For my part, I don’t see anything wrong with creating House Elves as you’ve described (making them from something that’s not an existing species). If they really have a conscious experience that enjoys serving someone, then it’s anthropomorphism on our part to view them as somehow oppressed—resenting oppression simply isn’t a feeling they would have; they weren’t constructed through an evolutionary history involving dominance contests.
I do, however, doubt that they could replicate human servants: for a House Elf to actually understand what you’re ordering it to do requires interpretive assumptions that we would naturally make when giving commands. Are these interpretive assumptions so fundamental to our psychology that House Elves would have to resent their status in order to understand commands as well as a human servant?
Aristotle thought that some people were naturally slaves, and it’s hard to overestimate the historical importance of Aristotle on philosophy. See http://www.suite101.com/content/aristotle-on-slavery—some-people-are-slaves-by-nature-a252374
More recently, George Orwell wrote about the waiters in the Parisian restaurant where he was a dishwasher:
...never be sorry for a waiter. Sometimes when you sit in a restaurant, still stuffing yourself half an hour after closing time, you feel that the tired waiter at your side must surely be despising you. But he is not. He is not thinking as he looks at you, ‘What an overfed lout’; he is thinking, ‘One day, when I have saved enough money, I shall be able to imitate that man.’ He is ministering to a kind of pleasure he thoroughly understands and admires. And that is why waiters are seldom Socialists, have no effective trade union, and will work twelve hours a day — they work fifteen hours, seven days a week, in many cafes. They are snobs, and they find the servile nature of their work rather congenial.
See http://ebooks.adelaide.edu.au/o/orwell/george/o79d/chapter14.html
Lots of people get real pleasure out of pleasing authority. Not out of being abused by authority, but rather by getting little pats on the head. Elves are not far from human at all in that way.
Dogs love to be praised by master, and that’s why we love them. Dogs were bred by humans from wolves to be obedient and submissive, if not sentient.
EDIT/P.S. Slightly off-topic. Epictetus, one of the founders of Stoic philosophy, was himself a slave for much of his life. His thought was absolutely rationalist. As far as I know, he never questioned slavery as an institution.
It may not be exactly what Aristotle had in mind, but I feel obliged to point out that some people do consider themselves to be natural slaves, and I would consider it rude to contradict them about that. If you want one they can be obtained on collarme.com, Or So I’ve Heard.
Well bother. They all seem to be sex slaves. Which is great and all but I was hoping to branch out a bit and recruit some manual labour as well...
sorry if this squicks anyone here, but...
Not all of these people are sex slaves. Many of them are “service slaves”.
I, personally, want to be a service slave, aka “minion”, to someone whose life is dedicated to reducing x-risks.
The main purpose of this arrangement would be to maximize the combined effectiveness of me and my new master, at reducing x-risks. I seem to be no good at running my own life, but I am reasonably well-read on topics related to x-risks, and enjoy doing boring-but-useful things.
And I might as well admit that I would enjoy being a sex slave in addition to being a service slave, but that part of the arrangement is optional. But if you’re interested, I’m bisexual, and into various kinds of kink.
Adelene Dawner has generously offered to help train me to be a good minion. I plan to spend the next few months training with her, to gain some important skills, and to overcome some psychological issues that have been causing me lots of trouble.
I haven’t set up a profile on collarme.com for myself yet.
I was initially creeped out by this comment. Then I read on and got more creeped out. But at some point it got so weird that it turned back on itself and became awesome. It might be my favorite thing I’ve read here.
Us humans are so damned interesting. Not to mention diverse—in ways some people who drone on about diversity can’t even comprehend! And there is something special about a space which not only can make sense of the above comment but can tolerate it. And kudos Peerinfinity, for not being afraid to be seen as different. I would be.
And as a matter of self-observation I got more accepting of the above comment once it became about sex; which must be the result of some kind of liberal, sex-positive memetic infection. Why should I be more tolerant of sexual desires than other life desires?
Upvoted for this.
Relevant details:
Peer is not currently ready to be a minion in any kind of stressful environment. Every authority figure he’s had so far has been of the ‘I won’t respect you unless you stand up for yourself’ type and not very sane, and as a result of that Peer has some very dysfunctional habits when it comes to interacting with such people. I’m already working on fixing that, but I expect that it will take at least a year before he’s able to deal with normal expressions of disapproval in a sane way, voluntarily communicate important information that the recipient might not like, and correctly parse the relative importance of an order given previously which was stated to be very important vs. an order given recently with more emotional weight behind it but no other statement that it’s unusually important. I’m confident he’ll get there, but it’s going to be a while. (This is also why I’m involved at all—the more neurotic aspects of this issue are painful to watch, leading me to want to try to fix it. I’m not sure if I’ll turn out to like having a minion around enough to keep one in the long term—it’s possible, but I also like living alone. In any case, Peer wants to be working for someone who’s actively involved in x-risk prevention, and I’m not.)
We’re also going to be working on practical skills; if anyone thinks they’ll be interested in taking Peer on when he’s done learning how to interact with sane people, it would be good for them to contact me about any skills they think they’d like him to gain. (My plans so far are pretty basic with a geeky twist: Cooking, cleaning, home maintenance, Lojban and/or sign language, computer hardware skills, social skills, organizational skills, etc. Peer already programs.)
Wow. The concept is fascinating.
Once we’ve settled into some kind of routine, I plan to set up a blog where Peer will document his progress. I’ll make a point of announcing it in the discussion section when that happens.
How’s it been going?
Peer turned out to be higher-maintenance than I expected, to the point where it obviously wasn’t going to work out. He went back home, and has been working on self-improvement there, with some amount of apparent success.
I’ve pointed this question out to him in case he wants to give more details.
Actually, I saw a few who wanted to do cooking and housework, and one who was willing to give her master the money she earned from her job (provided he fed her).
Ok. Now this sounds promising!
Unfortunately, you’ll probably also have to have sex with them.
Well that’s a chore. :P
The word “slavery” is used as a blanket term for far too many phenomena. IIRC, “slaves” in Roman times had regular work hours, could own property, could buy out their contracts, etc.
True. On the other hand, historically, perhaps the majority of humans since the invention of agriculture have lived lives of what we would regard as continual drudge work, poverty, little or no protection from authority, and very little opportunity for advancement. Until recently, philosophers—and everyone else—took it for granted that this was how the world would always work. Certainly some “slaves” were privileged, but by the same token, huge numbers of nominally “free” people had it worse than canonical house elves. Even today, this is how it is for many, many people.
I wonder how much of bonding people into groups (notably the inculcation of patriotism) is equivalent to trying to create house elves.
I’d suggest that most or all hierarchies of any kind try to make use of the natural house-elfishness inherent in the majority of people.
On the other hand, Eric Hoffer in the True Believer suggested that mass movements—including the more fanatical kind of patriotism—were somewhat different than less dynamic, more established hierarchies, like armies. As I understand it, most combat soldiers are directly motivated more by loyalty to their platoon rather than to Democracy.
It seems to me extremely improbable that this should be the case.
What you’re suggesting is that in the Vast space of all possible minds (that would fit in a head), the only ones who understand what you mean when you tell them to clean the dishes are ones who resent their status. If true that would mean an FAI cannot exist, either.
But such an AI is presumably smart and powerful enough to model you from scratch in enough detail to understand what you want done. We human beings (and, I would guess, house elves) have to rely on the cruder approach of imagining ourselves in the shoes of the requester, correcting for their situation based on whatever information we have that we think would make our behaviors differ, and then trying to use that model to predict the implied meaning of the other agent’s request. Hence anthropomorphism (house-elf-ism?). And that doesn’t always work too well; see Points of Departure.
The obvious objection here would be if house-elves were not like humans in their ability to model humans; maybe they were created with the necessary brain components to emulate a human being at will or something. But, somehow, that doesn’t strike me as the kind of thing the wizards responsible for their creation would think of.
I think you’re underestimating how much you’re constraining the mindspace when you require that the Elf interpret the soundwaves you’re emitting as a command whose meaning matches the meaning in your mind. Even if it correctly makes the [sound wave] → “clean” conversion, what about its mind makes it know what sense of “clean” you intend here, and what counts as clean?
I’m not asking because I want to make it sound like an insurmountable problem, but to show the problems inherent in making a mind assimilate all the heuristics of a human mind without being too human.
Well, this is certainly true as far as it goes.
But you initially appeared to go further: you seemed to be claiming, not that being able to usefully interpret natural-language commands entails understanding a great deal about the world, but that it entails resenting oppression.
The former claim, I agree with completely; the latter claim I think needs far more support than I can see for it to be worth seriously considering.
So, if you are backing away from that latter claim, or weren’t making it in the first place, well and good. If you are making it, I’m interested in your reasons for believing it.
I’m not sure I can do the topic justice without writing a full article, but here’s my thinking: to sufficiently locate the “hypothesis” of what the Elf should be doing (given a human command), it requires some idea of human motives. Once it internally represents these human motives well enough to perform like a human servant, and sorts its own preferences the same way, then it has become a human in every way except that it distinguishes between humans and Elves, the former of which should be favored.
But once the Elf has absorbed this human-type generating function, then anything that motivates humans can motivate the Elf—including sympathizing with servant groups (literally: “believing that it would be good to help the servants gain rank relative to their masters”), and recognizing themselves, or at least other Elves, as being such a servant group.
You can patch these problems, of course, but each patch either makes the Elf more human (and thus wrong to treat as a servant class) or less effective at serving humans. For example, you could introduce a sort of blind spot that makes it (“mechanically”) output “that’s okay” whenever it observes treatment that it would regard as bad if done to a human. But if this is all that distinguishes Elves from humans, then the Elves start to bear too much cognitive similarity to humans who have undergone psychological abuse.
Well, right, but I’m left with the same question. I mean, yes, I agree that “once it internally represents these human motives well enough to perform like a human servant, and sorts its own preferences the same way,” then everything else you say follows, at least to a rough approximation.
But why need it sort its own preferences the same way humans do?
What seems to underlie this argument is an idea that no cognitive system can understand a human’s values well enough to predict its preferences without sharing those values… that I can’t understand what you want well enough to serve you unless I want the same things.
If that’s true, it’s news to me, so I’m interested in the arguments for it.
For example, it certainly seems possible to model other things in the world without myself becoming those things: I can develop a working model of what pleases and upsets my dog, and what she likely wants me to do when she behaves in certain ways, without myself being pleased or upset or wanting those things. Do you claim that’s an illusion?
That is what I (thought I) was explaining in the following paragraphs. Once it a) knows what humans want, and b) desires acting in a way that matches that preference ranking, it must carve out a portion of the world’s ontology that excludes itself from being a recipient of that service.
It’s not that the Elf would necessarily want to be served like it serves others (although that is a failure mode too); it’s that the Elf would resemble a human well enough at that point that we would have to conclude that it’s wrong to treat it as a servant. The fact that it was made to enjoy it is no longer a defense, for the same reason it’s not a defense to say, “but I’ve already psychologically abused him/her enough that he/she enjoys this abuse!”
That’s not my premise. My premise is (simplifying a bit) that it’s the decision mechanism of a being that primarily determines its moral worth. From this it follows that beings adhering to decision mechanisms of similar enough depth and with similar enough values to humans ought to be regarded as human.
For that reason, I see a tradeoff between effectiveness at replicating humans vs. moral worth. You can make a perfect human replica, but at the cost of obligating yourself to treat it as having the rights of a human. See EY’s discussion of these issues in Nonperson predicates and Can’t Unbirth a Child.
An alien race could indeed model humans well enough to predict us—but at that point they would have to be regarded as being of similar moral worth to us (modulo any dissonance between our values).
OK, I think I understand you now. Thanks for clarifying.
I suppose it boils down to two things.
1) The ethics of wireheading someone else. It doesn’t feel like “real” utility to create something that’s just compelled to act a certain way. Utility doesn’t transfer like that. Whether the elves are fulfilling their own utility function has no necessary bearing on mine, nor does it make anything moral—imagine creating orcs instead, for an example of a clearly immoral utility function.
2) The non-PETA reason for making laws about animal abuse: not because of animals, but because of humans. For example, being cruel to animals is an excellent predictor of being cruel to people, possibly causal, and I doubt the ‘animals’ saying “please sir, hit me again” is going to help any.
Wireheading doesn’t feel like wireheading from the inside. Ask a meth addict.
I regard all living creatures as already being wireheaded. It’s just an imperfect enough wireheading that we have latitude in our values.
Sorry, I don’t know any meth addicts. But I think your statement may be because “like” and “need” are two different things—it’s possible for the brain to need something but not find it particularly pleasurable (about a third of the smokers I know don’t enjoy smoking in itself, and over three-quarters don’t enjoy it once they factor in the cancer). Any decent job of wireheading could avoid this, though.
Wireheading is hijacking the signals that originally help our brains learn / act / adapt / etc, and just hammering on a few of them. It doesn’t make sense to me to say that all life is already wireheaded, because in the beginning there was nothing to hijack.
You are assuming unmodified humans are doing the creating and are also using the agents. Point 2) can be made moot by modifying the users so as to eliminate damage to them.
True. Though this is not the case in the Harry Potter novels.
The house elves seem to be a bit of a shout out to the Ameglian Major Cow. In that case a mind was wire-headed to enjoy something that was pretty clearly bad for it. Arthur had a problem with this, but they argued that if you were going to eat a Cow, it was more moral to wire it to enjoy being eaten.
If you accept that doing chores is just on a continuum with being tortured or eaten, which EY might, then the question is the same as whether it’s Evil to wirehead someone into enjoying being tortured or eaten.
Edit: For clarity, I don’t think I agree with the claim that creating them is “Evil,” but I think I understand why EY would make a character who makes statements like that.
I’m not sure if doing chores in and of itself can be viewed as on a continuum with being tortured, for the purposes of this exercise. Being forced to do chores is considered bad for two reasons (as far as I know): Most people find doing chores to be intrinsically not enjoyable, and most people have other goals that they’d prefer to spend their time pursuing. Being tortured matches at least the first part of that description, and usually matches the second part as well. But for house elves, doing chores is not intrinsically not enjoyable, and it appears that they generally don’t have other significant goals to pursue—and this is their native state; if you create a house elf from nothing, rather than modifying another creature to be house-elf-like, there’s no ‘rewiring’ involved at all. (And the OP made a point of specifying that that’s the case, since it is obviously problematic to rewire a creature in a way that’s opposed to its values.)
It may be useful to also consider the case of masochistic people, for whom things like being whipped are enjoyable: Given that some people seem to just naturally be that way—it’s not caused by trauma or anything, in most cases, unless I’ve really missed something in my research—is it somehow problematic that they exist?
My lower brain agrees with you. My upper brain asks if this is just a trolley problem that puts a high moral value on non-intervention.
Scenario A: Option 1: Create house elves out of nothingness, wire them to enjoy doing chores. Option 2: Create house elves out of nothingness, wire them to enjoy human desires.
Scenario B: Option 1: Take existing house elves with human desires, wire them to enjoy doing chores. Option 2: Leave existing house elves with human desires alone.
Is there a non-trolley explanation for why it is immoral to rewire a normal elf, but not immoral to create a new race that is hard-wired for chores? On the trolley questions I was fine with even pushing a supervisor on the tracks, but I couldn’t agree with harvesting a healthy victim for multiple organs.
The problem with rewiring someone against their will has to do with the second issue I mentioned, not the first one—changing their preferences and their utility function. If you’re creating something from scratch, I don’t see how that can be an issue without arbitrarily privileging some set of values as ‘correct’ - if you’re creating something from scratch, there are no pre-existing values for the new values to be in conflict with. (The first issue doesn’t seem to raise the same problems: I think I would consider it okay, or at least ‘questionable’ rather than ‘clearly bad’, to re-wire someone to enjoy doing things that they would be doing anyway to achieve their own goals, if you were sufficiently sure that you actually understood their goals; however, I don’t think that humans can be sufficiently sure of other humans’ goals for that.)
It’s not clear to me how you’re mapping this problem to the trolley problem. This is probably because I have some personal stuff going on and am not in very good cognitive shape, but regardless of the cause, if you want to talk about it in those terms I’d appreciate a clearer explanation.
To me the Trolley problem is largely about how much you’re willing to only look at end-states. In the trolley problem you have two scenarios with two options, leaving you with identical end states. Same goes for the House Elf problem, assuming that it is in the wizard’s power to create more human-like desires.
The main difference between the cases that I see in the Trolley problems are “to what extent is the person you’re killing already in danger?” Being already on a track is pretty inherently dangerous. Being on a bridge in a mine isn’t as dangerous. Wandering into a hospital with healthy organs isn’t inherently dangerous at all.
Suppose the house elves were created just wanting to do chores. Would it be moral to leave them like that if you could make them more human? What if they had once been more human and you were now “reverting” them?
Ah-ha. Okay. I hadn’t thought of the trolley problem in those terms before. It’s not very relevant to how I’m thinking, though; I’m thinking in terms of what actions are acceptable from a given starting point, not what end states are acceptable.
As to house elves: I don’t consider humanlike values to be intrinsically better than other values in the relevant sense—I disagree with Clippy about the ideal state of the world, and am likely to come into conflict with em in relevant cases, but if the world were arranged in such a way that beings with clippylike values could exist without being in conflict with beings with other values, I would have no objection to such beings existing, and that’s basically the case with house elves. (And I don’t think it’s intrinsically wrong for Clippy to exist, just problematic enough that there are reasonable objections.)
I would consider causing house elves to have humanlike values equally problematic as causing humans to have house-elf-like values, regardless of whether the house elves were human to begin with, assuming that house elves are satisfied with their values and do not actively want to have humanlike values. Two wrongs don’t make a right.
But if the creatures enjoy their situation and manage to self-replicate, or are immortal, isn’t the use of their labour more like a form of parasitism on the species?
One could argue parasitism is wrong, but the act of creating them vulnerable to parasitism seems neutral as long as they are capable of survival despite it.
Instead of creating them from scratch, would it be immoral to take a species that hated chores and wirehead them to enjoy chores?
I think it is. I mentioned this possibility here:
However, I think the tipping point starts way before “not a single line of derived code”:
Requiring magical reinforcement would seem to indicate that the house elves would do other than follow orders if given freedom. IIRC, the elves are portrayed as not psychologically stable as well, perhaps indicating that programmed servitude (at least as executed in the HP universe) is not a stable equilibrium for a mind.
As for the concept of house elves: if executed in such a manner that they freely choose to serve humans when given latitude, I see no problem.
This is a good point about the example I picked. However, let me point out that many House Elves who were well treated resented attempts to give them freedom, or were insulted by rewards beyond kindness.
If such a creature were created, treating it poorly seems intuitively very, very wrong. Like kicking puppies wrong.
I agree.
This seems problematic if, when designing minds, in particular minds that are prone to excessive pain and anguish, one could specify, as an additional fact about that mind, that it would value (act to promote) its own existence.
There are also issues if a different creature/mind could be designed to better fill the economic niche: one that produces more surplus in paying for its existence, and that values its own existence more.
Consider the creatures you would design to perform the function of the house elves, and ask what reasons make house elves different from those.
I just want to state for the record that this is what I think Robin Hanson once argued in the context of transhuman creatures living on the bare minimum.
I may have misinterpreted the argument.
“Changing agents without their consent or knowledge seems obviously wrong...”
I disagree. If they’re producing more utility in the new form, it’s better. We should be deterred no more by their disagreement than by anybody else’s.
Nothing much. I’m cool with playing God.