I have a question about why humans see the following moral positions as different when really they look the same to me:
1) “I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don’t cooperate.”
2) “I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating.”
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with “the abstract idea of punishment” into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of “eating children” are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless “intelligent” and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been “illegitimately modified” or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary “credentials” for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren’t currently willing to provide such information, are there preconditions you could propose before you would do so?
Conversations with you are difficult because I don’t know how much I can assume that you’ll have (or pretend to have) a human-like motivational psychology… and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I’m not sure about really fundamental aspects of your “inner life” like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about “statistical regularities of observed english” than “compiling english into a data structure that supports generic inference”. By the end of such posts I’m generally asking a lot of questions as I grope for common ground, but you general don’t answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we’re both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
Ok, so first of all, there’s a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn’t do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don’t want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don’t want the kids to be eaten, and we don’t want the adults to eat. We don’t want to balance any of these interests, because they go against our values. Just like you wouldn’t balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is “Well, of course you don’t want the punishments. That’s the point. So cooperate, or you’ll get punished. It’s not fair to exempt yourself from the rules.” And my reaction to position (2) is “We don’t want any baby-eating, so we’ll save you from being eaten, but we won’t let you eat any other babies. It’s not fair to exempt yourself from the rules.” This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely, in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to “free ride” off the sacrifices that the system requires of everyone?
Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don’t offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that’s at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don’t give consent to be eaten.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
It is not the adults’ preference that matters, but the adults’ best model of the childrens’ preferences. In this case there is an obvious reason for those preferences to differ—namely, the adult knows that he won’t be one of those eaten.
In extrapolating a child’s preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can’t extrapolate from a child whose fate is undecided to an adult that believes it won’t be eaten; that change alters its preferences.
It is not the adults’ preference that matters, but the adults’ best model of the childrens’ preferences.
Do you believe that all children’s preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count—that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We’ve even evolved to convince ourselves that we actually care about morality and not self-interest. That’s likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) “I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don’t cooperate.”
2) “I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating.”
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with “the abstract idea of punishment” into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of “eating children” are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless “intelligent” and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been “illegitimately modified” or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary “credentials” for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren’t currently willing to provide such information, are there preconditions you could propose before you would do so?
I … understood about a tenth of that.
Conversations with you are difficult because I don’t know how much I can assume that you’ll have (or pretend to have) a human-like motivational psychology… and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I’m not sure about really fundamental aspects of your “inner life” like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about “statistical regularities of observed english” than “compiling english into a data structure that supports generic inference”. By the end of such posts I’m generally asking a lot of questions as I grope for common ground, but you general don’t answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we’re both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
Oh, anyone can email me at clippy.paperclips@gmail.com.
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
I understand what you mean now.
Ok, so first of all, there’s a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn’t do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don’t want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don’t want the kids to be eaten, and we don’t want the adults to eat. We don’t want to balance any of these interests, because they go against our values. Just like you wouldn’t balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is “Well, of course you don’t want the punishments. That’s the point. So cooperate, or you’ll get punished. It’s not fair to exempt yourself from the rules.” And my reaction to position (2) is “We don’t want any baby-eating, so we’ll save you from being eaten, but we won’t let you eat any other babies. It’s not fair to exempt yourself from the rules.” This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely, in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to “free ride” off the sacrifices that the system requires of everyone?
Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don’t offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that’s at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don’t give consent to be eaten.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
It is not the adults’ preference that matters, but the adults’ best model of the childrens’ preferences. In this case there is an obvious reason for those preferences to differ—namely, the adult knows that he won’t be one of those eaten.
In extrapolating a child’s preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can’t extrapolate from a child whose fate is undecided to an adult that believes it won’t be eaten; that change alters its preferences.
Do you believe that all children’s preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count—that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
And why do you think that such reflection would make the babies reverse the baby-eating policies?
Different topic spheres. One line sounds nicely abstract, while the other is just iffy.
Also killing people is different from betraying them. (Nice read: the real life section of tvtropes/moraleventhorizon)
With 1), you’re non-cooperator and the punisher is society in general. With 2), you play both roles at different times.
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We’ve even evolved to convince ourselves that we actually care about morality and not self-interest. That’s likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Sounds like Robin Hanson’s Homo Hypocritus theory.