Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans’ terminal values are adjusted so that they don’t strive for infinite individual lifespan.
Possible outcome; better than most; boring. I don’t think that’s really something to strive for, but my values are not yours, I guess. Also, I’m assuming we’re only considering whether an outcome is desirable, not how likely it is to actually come about.
I don’t. Suffering is bad, finite individual existence is not necessarily so.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I’m just curious to see how your moral values differ from mine.
Good question. I’m just looking at some possible worlds where eternal individual life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.
Note: Not trying to attack your position, just curious.
Fixed by whom, might I ask?
You seem to be implying that designed death is worse. How do you figure?
Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans’ terminal values are adjusted in a way that they don’t strive for infinite individual lifespan.
I don’t. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls. That’s literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to do so!
If it’s a terminal value for most people to suffer and grieve over the loss of individual life—and they want to suffer and grieve, and want to want to—a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we’d be messing with the very complexity of human values, period.
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
Ah… If you or I knew what to think, we’d be working on CEV right now, and we’d all be much less fucked than we currently are.
There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls.
A statement like that needs a mathematical proof.
If it’s a terminal value for most people to suffer and grieve over the loss of individual life
“If” indeed. There is little “evolution-spawned” about it (not that it’s a good argument to begin with, trusting the “blind idiot god”); a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don’t you give it a go?
If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.
Looks like you and I have different terminal meta-values.
I’m really curious to know what you mean by ‘terminal meta-values’. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
Say, whether it is ever acceptable to adjust someone’s terminal values.
No, I’m perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it’s likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you’re a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people’s distress when you kill them, in which case remind me never to drink anything you give me.
Typical mind fallacy?
… are you saying I’m foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn’t realize? Yes? Congratulations, you’re not a psychopath.
Everyone who voluntarily joins the military is a psychopath?
Tell you what. Instead of typing out the answer to that, I’m going to respond with a question: how do you* think people who join the military justify the fact that they will probably either kill or aid others in killing?
*(I do have an answer in mind, and I will post it, even if your response refutes it.)
I think they have many different justifications depending on the person, ranging from “it’s a necessary evil” to “I need to pay for college and can hopefully avoid getting into battle” to “only the lives of my own countrymen matter”, just like people can have many different justifications for why they’d approve modifying the terminal values of others.
So, despite the downvotes that bought me …
I said “non-psychopaths consider killing a Bad Thing.”
You said “But what about people who join the army?”
I said “What do you think?”
You said “I think they justify it as saving more lives than it kills, or come up with reasons it’s not really killing people”
I think this conversation is over, don’t you?
Do you see my point that there are plenty of ways by which somebody can consider killing as not-so-bad, without needing to be a psychopath?
No. Something can be bad without being worse than the other options, and people can be mistaken about whether an action will kill people. This is quite separate from actually having no term for human life in their utility function.
There’s an important difference between “not bad” and “bad but justifiable under some circumstances”. I don’t think those who support abortion, execution or war believe that killing per se is morally neutral. Each of those three has its justification.
I believe abortion is morally neutral, at least for the first few months and probably more.
But I said “killing per se”.
“Neurotypical”… almost as powerful as True!
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he’s actually a psychopath or, indeed, an actual salmon-person (those are still technically “human”, I assume.)
“Neurotypical”: the tyranny of some supposedly existing, elusive majority which has always valued (ever since living in trees) and will always value (when colonizing the Canis Major Dwarf Galaxy) essentially the same things (such as strawberry ice cream and not killing people).
If your utility function differs, it is wrong, while theirs is right. (I’d throw in some reference to a divine calibration, but that would be overly sarcastic.)
I may be confused by the sarcasm here. Could you state your objection more clearly? Are you arguing “neurotypical” is not a useful concept? Are you accusing me of somehow discriminating against agents that implement other utility functions? Are you objecting to my assertion that creating an agent with a different utility function is usually instrumentally bad, because it is likely to attempt to implement that utility function to the exclusion of yours?
Are you accusing me of somehow discriminating against agents that implement other utility functions?
Yes, here’s your last reply to me on just that topic:
Except that humans share a utility function, which doesn’t change. (...) Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because this is some essential property of these actions, morality is in our minds, but we can still meaningfully say “this is wrong” just as we can say “this is a chair” or “there are five apples”.
Also:
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath [emphasis mine], and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt.
It is bizarre to me how you believe there is some shared objective morality—“underneath”—that is correct because it is “typical” (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.
Even if there are shared elements (even across e.g. large, vague categories such as Chinese values and Western values), such as surmised by CEV_humankind (probably an almost empty set), that does not make anyone’s own morality/value function wrong; it merely makes it incongruent with the current cultural majority views. Hence the “tyranny of some supposedly existing elusive majority”.
Bloody hell, it’s you again. I hadn’t noticed I was talking to the same person I had that argument with. I guess that information does add some context to your comment.
I’m not saying they’re wrong, except when “wrong” is defined with reference to standard human values (which is how I, and many others on LW, commonly use the term.) I am saying their values are not my values, or (probably) your values. That’s not to say they don’t have moral worth or anything, just that giving them (where “them” means salmon people, clippies or garden-variety psychopaths) enough power will result in them optimizing the universe for their own goals, not ours.
Of course, I’m not sure how you judge moral arguments, so maybe I’m assuming some common prior or something I shouldn’t be.
Your comment just saying “well, this is the norm” does not fit with your previously stated views; see this exchange:
I would value the suffering of my child as more important than the suffering of your child.
That seems … kind of evil, to be honest.
Are most all parents “evil” in that regard?
I believe the technical term is “biased”.
My assertion is that all humans share utility—which is the standard assumption in ethics, and seems obviously true
So if the majority of humans values the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
Do you see the cognitive dissonance? (Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
* It fits well with some divine objective morality; however, it does not fit well with “standard human values” that are supposed to be merely descriptive, not prescriptive (not an immutable set in itself; you probably read Harry’s monologue on shifting human values through the ages in the recent HPMOR chapter).
I’m asserting the values you describe are not, in fact, the standard human values. If it turned out that parents genuinely have different values from other people, then they wouldn’t be biased (depending on how we define “evil”).
(Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
We are both agents with human ethics. When I say we “ought” to do something, I mean by the utility function we both share. If I were a paperclipper, I would need separate terms for my ethics and yours. But then, why would I help you implement values that oppose my own?
It comes down to “I value this human over that other human” being a part of your utility function, f(this.human) > f(that.human). [Syntactical overloading for comedic relief] A bias is something affecting your cognition (how you process information), not what actions you choose based upon that processing. While you can say “your values are biased towards X”, that is using the term in a different sense than the usual LW one.
In particular, I doubt you’ll find more than 1 in a million humans who would not value some close relative’s / friend’s / known person’s life over a randomly picked human life (“It could be anything! It could even be another boat!”).
You have here a major, major part of the utility function of a majority of humans (throughout history! in-group > out-group), yet you persist in calling that an evil bias. Why, because it does not fit with what the “standard human values” should be? What god intended? Or is there no religious element to your position at all? If so, please clarify.
You realize that most humans value eating meat, right? Best pick up that habit, no? ;)
I just realized I never replied to this. I definitely meant to. Must have accidentally closed the tab before clicking “comment”.
No. I believe they are mostly misinformed regarding animal intelligence and capacity for pain, conditions in slaughterhouses and farms etc.
[Edited as per Vaniver’s comment below]
I really don’t think it’s a stretch to say that they value eating meat, even if only instrumentally, in service of valuing tastiness and healthiness. Even beyond eating meat, it appears that a significant subset of humans (perhaps most?) enjoy hunting animals, suggesting that could be a value up for consideration.
And even if they make a tradeoff between the value of eating meat and the value of not inflicting suffering, that doesn’t mean they don’t have the value of eating meat. Policy debates should not appear one-sided.
You’re talking about humans alive today? Or all humans who’ve ever lived? I’d be extremely surprised if more than 50% of the former had hunted and enjoyed it. (And, considering that approximately half the humans are female, I would be somewhat surprised about the latter as well.)
So, by “enjoy hunting” I mean more “after going hunting, would enjoy it” than “have gone hunting and enjoyed it.” In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they would imagine that it would be. It is not clear to me if the percentage of women who would enjoy hunting is smaller than the percentage of men who would not.
Be careful with that kind of argument, for the same is probably true of heroin. (Yes, there are huge differences between hunting and heroin, but still...)
Dammit, I was literally about to remove that claim when you posted this :(