I know I’m not the best at doing the background research, so I’m not saying this to criticize, but:
Hasn’t this issue been discussed in the philosophical literature long before there were House Elves in Harry Potter, or even HHG2G? I’m pretty sure it has, and it goes by a standard name, but I don’t know what that name is.
For my part, I don’t see anything wrong with creating House Elves as you’ve described (making them from something that’s not an existing species). If they really have a conscious experience that enjoys serving someone, then it’s anthropomorphism on our part to view them as somehow oppressed—resenting oppression simply isn’t a feeling they would have; they weren’t constructed through an evolutionary history involving dominance contests.
I do, however, doubt that they could replicate human servants: for a House Elf to actually understand what you’re ordering it to do requires interpretive assumptions that we would naturally make when giving commands. Are these interpretive assumptions so fundamental to our psychology that House Elves would have to resent their status in order to understand commands as well as a human servant?
Aristotle thought that some people were naturally slaves, and it’s hard to overestimate Aristotle’s historical importance to philosophy. See http://www.suite101.com/content/aristotle-on-slavery--some-people-are-slaves-by-nature-a252374
More recently, George Orwell wrote about the waiters in the Parisian restaurant where he was a dishwasher:
...never be sorry for a waiter. Sometimes when you sit in a restaurant, still stuffing yourself half an hour after closing time, you feel that the tired waiter at your side must surely be despising you. But he is not. He is not thinking as he looks at you, ‘What an overfed lout’; he is thinking, ‘One day, when I have saved enough money, I shall be able to imitate that man.’ He is ministering to a kind of pleasure he thoroughly understands and admires. And that is why waiters are seldom Socialists, have no effective trade union, and will work twelve hours a day — they work fifteen hours, seven days a week, in many cafes. They are snobs, and they find the servile nature of their work rather congenial.
See http://ebooks.adelaide.edu.au/o/orwell/george/o79d/chapter14.html
Lots of people get real pleasure out of pleasing authority. Not out of being abused by authority, but rather by getting little pats on the head. Elves are not far from human at all in that way.
Dogs love to be praised by their masters, and that’s why we love them. Dogs were bred by humans from wolves to be obedient and submissive, if not sentient.
EDIT/P.S. Slightly off-topic: Epictetus, one of the founders of Stoic philosophy, was himself a slave for much of his life. His thought was absolutely rationalist. As far as I know, he never questioned slavery as an institution.
It may not be exactly what Aristotle had in mind, but I feel obliged to point out that some people do consider themselves to be natural slaves, and I would consider it rude to contradict them about that. If you want one they can be obtained on collarme.com, Or So I’ve Heard.
Well bother. They all seem to be sex slaves. Which is great and all but I was hoping to branch out a bit and recruit some manual labour as well...
sorry if this squicks anyone here, but...
Not all of these people are sex slaves. Many of them are “service slaves”.
I, personally, want to be a service slave, aka “minion”, to someone whose life is dedicated to reducing x-risks.
The main purpose of this arrangement would be to maximize the combined effectiveness of me and my new master, at reducing x-risks. I seem to be no good at running my own life, but I am reasonably well-read on topics related to x-risks, and enjoy doing boring-but-useful things.
And I might as well admit that I would enjoy being a sex slave in addition to being a service slave, but that part of the arrangement is optional. But if you’re interested, I’m bisexual, and into various kinds of kink.
Adelene Dawner has generously offered to help train me to be a good minion. I plan to spend the next few months training with her, to gain some important skills, and to overcome some psychological issues that have been causing me lots of trouble.
I haven’t set up a profile on collarme.com for myself yet.
I was initially creeped out by this comment. Then I read on and got more creeped out. But at some point it got so weird that it turned back on itself and became awesome. It might be my favorite thing I’ve read here.
Us humans are so damned interesting. Not to mention diverse—in ways some people who drone on about diversity can’t even comprehend! And there is something special about a space which not only can make sense of the above comment but can tolerate it. And kudos Peerinfinity, for not being afraid to be seen as different. I would be.
And as a matter of self-observation I got more accepting of the above comment once it became about sex; which must be the result of some kind of liberal, sex-positive memetic infection. Why should I be more tolerant of sexual desires than other life desires?
Upvoted for this.
Relevant details:
Peer is not currently ready to be a minion in any kind of stressful environment. Every authority figure he’s had so far has been of the ‘I won’t respect you unless you stand up for yourself’ type and not very sane, and as a result of that Peer has some very dysfunctional habits when it comes to interacting with such people. I’m already working on fixing that, but I expect that it will take at least a year before he’s able to deal with normal expressions of disapproval in a sane way, voluntarily communicate important information that the recipient might not like, and correctly parse the relative importance of an order given previously which was stated to be very important vs. an order given recently with more emotional weight behind it but no other statement that it’s unusually important. I’m confident he’ll get there, but it’s going to be a while.

(This is also why I’m involved at all—the more neurotic aspects of this issue are painful to watch, leading me to want to try to fix it. I’m not sure if I’ll turn out to like having a minion around enough to keep one in the long term—it’s possible, but I also like living alone. In any case, Peer wants to be working for someone who’s actively involved in x-risk prevention, and I’m not.)
We’re also going to be working on practical skills; if anyone thinks they’ll be interested in taking Peer on when he’s done learning how to interact with sane people, it would be good for them to contact me about any skills they think they’d like him to gain. (My plans so far are pretty basic with a geeky twist: Cooking, cleaning, home maintenance, Lojban and/or sign language, computer hardware skills, social skills, organizational skills, etc. Peer already programs.)
Wow. The concept is fascinating.
Once we’ve settled into some kind of routine, I plan to set up a blog where Peer will document his progress. I’ll make a point of announcing it in the discussion section when that happens.
How’s it been going?
Peer turned out to be higher-maintenance than I expected, to the point where it obviously wasn’t going to work out. He went back home, and has been working on self-improvement there, with some amount of apparent success.
I’ve pointed this question out to him in case he wants to give more details.
Actually I saw a few who wanted to do cooking and housework: and one who was willing to give her master the money she earned from her job (provided he fed her).
Ok. Now this sounds promising!
Unfortunately, you’ll probably also have to have sex with them.
Well that’s a chore. :P
the word slavery is used as a blanket term for far too many phenomena. IIRC “slaves” in roman times had regular work hours, could own property, could buy out their contract etc.
True. On the other hand, historically, perhaps the majority of humans since the invention of agriculture have lived lives of what we would regard as continual drudge work, poverty, little or no protection from authority, and very little opportunity for advancement. Until recently, philosophers—and everyone else—took it for granted that this was how the world would always work. Certainly some “slaves” were privileged, but by the same token, huge numbers of nominally “free” people had it worse than canonical house elves. Even today, this is how it is for many, many people.
I wonder how much of bonding people into groups (notably the inculcation of patriotism) is equivalent to trying to create house elves.
I’d suggest that most or all hierarchies of any kind try to make use of the natural house-elfishness inherent in the majority of people.
On the other hand, Eric Hoffer in The True Believer suggested that mass movements—including the more fanatical kind of patriotism—were somewhat different from less dynamic, more established hierarchies, like armies. As I understand it, most combat soldiers are motivated by loyalty to their platoon rather than to Democracy.
It seems to me extremely improbable that this should be the case.
What you’re suggesting is that in the Vast space of all possible minds (that would fit in a head), the only ones who understand what you mean when you tell them to clean the dishes are ones who resent their status. If true that would mean an FAI cannot exist, either.
But such an AI is presumably smart and powerful enough to model you from scratch in enough detail to understand what you want done. We human beings (and, I would guess, house elves) have to rely on the cruder approach of imagining ourselves in the shoes of the requester, correcting for their situation based on whatever information we have that we think would make our behaviors differ, and then trying to use that model to predict the implied meaning of the other agent’s request. Hence anthropomorphism (house-elf-ism?). And that doesn’t always work too well; see Points of Departure.
The obvious objection here would be if house-elves were not like humans in their ability to model humans; maybe they were created with the necessary brain components to emulate a human being at will or something. But, somehow, that doesn’t strike me as the kind of thing the wizards responsible for their creation would think of.
I think you’re underestimating how much you’re constraining the mindspace when you require that the Elf interpret the sound waves you’re emitting as a command whose meaning matches the meaning in your mind. Even if it correctly makes the [sound wave] → “clean” conversion, what about its mind makes it know what sense of “clean” you intend here, and what counts as clean?
I’m not asking because I want to make it sound like an insurmountable problem, but to show the problems inherent in making a mind assimilate all the heuristics of a human mind without being too human.
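To make the underdetermination concrete, here is a toy sketch (entirely my own construction, not anything from the thread; the candidate readings and the prior weights are invented for illustration): even after the sound-wave-to-word step succeeds, picking the intended goal requires something like a prior over human motives.

```python
# A literal command underdetermines the intended goal; the interpreter
# must rank candidate readings using a model of the speaker's motives.

CANDIDATE_READINGS = {
    "clean the dishes": [
        "wash the dirty dishes in the sink",
        "sterilize every dish in the house",
        "hide the dishes so none are visible",
        "destroy the dishes (no dishes, no dirt)",
    ],
}

# Stand-in for the "human-type generating function": a prior over goals,
# learned from (or built to imitate) human motives. Weights are made up.
HUMAN_MOTIVE_PRIOR = {
    "wash the dirty dishes in the sink": 0.90,
    "sterilize every dish in the house": 0.07,
    "hide the dishes so none are visible": 0.02,
    "destroy the dishes (no dishes, no dirt)": 0.01,
}

def interpret(command: str) -> str:
    """Pick the reading a human speaker most plausibly intended."""
    readings = CANDIDATE_READINGS[command]
    return max(readings, key=lambda r: HUMAN_MOTIVE_PRIOR.get(r, 0.0))

print(interpret("clean the dishes"))  # wash the dirty dishes in the sink
```

Without the prior, all four readings are equally consistent with the words; the prior is exactly the human-motive knowledge at issue.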
Well, this is certainly true as far as it goes.
But you initially appeared to go further: you seemed to be claiming, not that being able to usefully interpret natural-language commands entails understanding a great deal about the world, but that it entails resenting oppression.
The former claim, I agree with completely; the latter claim I think needs far more support than I can see for it to be worth seriously considering.
So, if you are backing away from that latter claim, or weren’t making it in the first place, well and good. If you are making it, I’m interested in your reasons for believing it.
I’m not sure I can do the topic justice without writing a full article, but here’s my thinking: to sufficiently locate the “hypothesis” of what it should be doing (given a human command), the Elf requires some idea of human motives. Once it internally represents these human motives well enough to perform like a human servant, and sorts its own preferences the same way, then it has become a human in every way except that it distinguishes between humans and Elves, the former of which should be favored.
But once the Elf has absorbed this human-type generating function, then anything that motivates humans can motivate the Elf—including sympathizing with servant groups (literally: “believing that it would be good to help the servants gain rank relative to their masters”), and recognizing themselves, or at least other Elves, as being such a servant group.
You can patch these problems, of course, but each patch either makes the Elf more human (and thus wrong to treat as a servant class) or less effective at serving humans. For example, you could introduce a sort of blind spot that makes it (“mechanically”) output “that’s okay” whenever it observes treatment that it would regard as bad if done to a human. But if this is all that distinguishes Elves from humans, then the Elves start to bear too much cognitive similarity to humans who have undergone psychological abuse.
Well, right, but I’m left with the same question. I mean, yes, I agree that “once it internally represents these human motives well enough to perform like a human servant, and sorts its own preferences the same way,” then everything else you say follows, at least to a rough approximation.
But why need it sort its own preferences the same way humans do?
What seems to underlie this argument is an idea that no cognitive system can understand a human’s values well enough to predict its preferences without sharing those values… that I can’t understand what you want well enough to serve you unless I want the same things.
If that’s true, it’s news to me, so I’m interested in the arguments for it.
For example, it certainly seems possible to model other things in the world without myself becoming those things: I can develop a working model of what pleases and upsets my dog, and what she likely wants me to do when she behaves in certain ways, without myself being pleased or upset or wanting those things. Do you claim that’s an illusion?
That is what I (thought I) was explaining in the following paragraphs. Once it a) knows what humans want, and b) desires acting in a way that matches that preference ranking, it must carve out a portion of the world’s ontology that excludes itself from being a recipient of that service.
It’s not that the Elf would necessarily want to be served like it serves others (although that is a failure mode too); it’s that the Elf would resemble a human well enough at that point that we would have to conclude that it’s wrong to treat it as a servant. The fact that it was made to enjoy it is no longer a defense, for the same reason it’s not a defense to say, “but I’ve already psychologically abused him/her enough that he/she enjoys this abuse!”
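The carve-out being described can be sketched as a toy value function (my own framing, not from the thread; `human_values` and `elf_values` are invented stand-ins): the Elf’s values are a copy of the shared human-motive model, patched with a hard-coded exclusion so it never counts Elves as possible recipients of concern.

```python
# Toy sketch of the ontology carve-out / "blind spot" patch.

def human_values(outcome: dict) -> float:
    """Stand-in for the shared human-type generating function."""
    score = 0.0
    if outcome.get("served"):
        score += 1.0
    if outcome.get("mistreated"):
        score -= 5.0
    return score

def elf_values(outcome: dict) -> float:
    # The patch: anything happening to an Elf is mechanically reported
    # as fine, regardless of what the shared value model would say.
    if outcome.get("subject") == "elf":
        return 0.0  # the "blind spot"
    return human_values(outcome)

# Identical mistreatment, evaluated differently only because of the
# carved-out ontology -- the asymmetry the comment objects to.
print(elf_values({"subject": "human", "mistreated": True}))  # -5.0
print(elf_values({"subject": "elf", "mistreated": True}))    # 0.0
```

The two functions agree everywhere except on the carved-out class, which is the sense in which the patched Elf is “a human in every way except” for that one distinction.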
That’s not my premise. My premise is (simplifying a bit) that it’s the decision mechanism of a being that primarily determines its moral worth. From this it follows that beings adhering to decision mechanisms of similar enough depth and with similar enough values to humans ought to be regarded as human.
For that reason, I see a tradeoff between effectiveness at replicating humans vs. moral worth. You can make a perfect human replica, but at the cost of obligating yourself to treat it as having the rights of a human. See EY’s discussion of these issues in Nonperson Predicates and Can’t Unbirth a Child.
An alien race could indeed model humans well enough to predict us—but at that point they would have to be regarded as being of similar moral worth to us (modulo any dissonance between our values).
OK, I think I understand you now. Thanks for clarifying.