Are you accusing me of somehow discriminating against agents that implement other utility functions?
Yes, here’s your last reply to me on just that topic:
Except that humans share a utility function, which doesn’t change. (...) Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because this is some essential property of these actions (morality is in our minds), but we can still meaningfully say “this is wrong” just as we can say “this is a chair” or “there are five apples”.
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath [emphasis mine], and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt.
It is bizarre to me how you believe there is some shared objective morality—“underneath”—that is correct because it is “typical” (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.
Even if there are shared elements (even across, e.g., large vague categories such as Chinese values and Western values), such as those surmised by CEV_humankind (probably an almost empty set), that does not make anyone’s own morality/value function wrong; it merely makes it incongruent with the current cultural majority views. Hence the “tyranny of some supposedly existing elusive majority”.
Bloody hell, it’s you again. I hadn’t noticed I was talking to the same person I had that argument with. I guess that information does add some context to your comment.
I’m not saying they’re wrong, except when “wrong” is defined with reference to standard human values (which is how I, and many others on LW, commonly use the term). I am saying their values are not my values, or (probably) your values. That’s not to say they don’t have moral worth or anything, just that giving them (where “them” means salmon people, clippies or garden-variety psychopaths) enough power will result in them optimizing the universe for their own goals, not ours.
Of course, I’m not sure how you judge moral arguments, so maybe I’m assuming some common prior or something I shouldn’t be.
Your comment, which just says “well, this is the norm”, does not fit with your previously stated views; see this exchange:
I would value the suffering of my child as more important than the suffering of your child.
That seems … kind of evil, to be honest.
Are most all parents “evil” in that regard?
I believe the technical term is “biased”.
My assertion is that all humans share utility—which is the standard assumption in ethics, and seems obviously true
So if the majority of humans value the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those values evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
Do you see the cognitive dissonance? (Also, you’re among the first commenters on LW—if not the only one—whom I’ve seen use even just “standard human values” as an ought, outside the context of CEV for FAI, which is a different concept.)
* It fits well with some divine objective morality; however, it does not fit well with supposed “standard human values” that are only descriptive, not prescriptive (and not an immutable set in themselves; you probably read Harry’s monologue on shifting human values through the ages in the recent HPMOR chapter).
So if the majority of humans value the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those values evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
I’m asserting the values you describe are not, in fact, the standard human values. If it turned out that parents genuinely have different values from other people, then they wouldn’t be biased (down to definitions of “evil”).
(Also, you’re among the first commenters on LW—if not the only one—whom I’ve seen use even just “standard human values” as an ought, outside the context of CEV for FAI, which is a different concept.)
We are both agents with human ethics. When I say we “ought” to do something, I mean according to the utility function we both share. If I were a paperclipper, I would need separate terms for my ethics and yours. But then, why would I help you implement values that oppose my own?
It comes down to “I value this human over that other human” being a part of your utility function, f(this.human) > f(that.human). [Syntactical overloading for comedic relief.] A bias is something affecting your cognition (how you process information), not what actions you choose based upon that processing. While you can say “your values are biased towards X”, that is using the term in a different sense than the usual LW one.
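(Purely as an illustration of that notation, here is a minimal Python sketch of a toy utility function under which a close person outweighs a stranger; the function, names, and weights are hypothetical, not anyone’s actual values.)

```python
# Hypothetical toy example only: a utility function that weights close
# relations above strangers. Nothing here is anyone's real value system.

def utility(person: str, closeness: str) -> float:
    """Toy utility of saving `person`, weighted by closeness to the agent."""
    base_value = 1.0   # value assigned to any human life
    kin_weight = 10.0  # extra weight for close relatives and friends
    return base_value * (kin_weight if closeness == "close" else 1.0)

this_human = utility("my child", "close")         # 10.0
that_human = utility("random stranger", "far")    # 1.0

# f(this.human) > f(that.human): a preference encoded in the utility
# function itself, not an error in how information is processed.
assert this_human > that_human
```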
In particular, I doubt you’ll find more than 1 in a million humans who would not value some close relative’s / friend’s / known person’s life over a randomly picked human life (“It could be anything! It could even be another boat!”).
You have here a major, major part of the utility function of a majority of humans (throughout history! in-group > out-group), yet you persist in calling that an evil bias. Why, because it does not fit with what the “standard human values” should be? What god intended? Or is there no religious element to your position at all? If so, please clarify.
You realize that most humans value eating meat, right? Best pick up that habit, no? ;)
I just realized I never replied to this. I definitely meant to. Must have accidentally closed the tab before clicking “comment”.
No. I believe they are mostly misinformed regarding animal intelligence and capacity for pain, conditions in slaughterhouses and farms, etc.
[Edited as per Vaniver’s comment below]
I really don’t think it’s a stretch to say that they value eating meat, even if only as an instrumental value in service of tastiness and healthiness. Even beyond eating meat, it appears that a significant subset of humans (perhaps most?) enjoy hunting animals, suggesting that could be a value up for consideration.
And even if they make a tradeoff between the value of eating meat and the value of not inflicting suffering, that doesn’t mean they don’t have the value of eating meat. Policy debates should not appear one-sided.
You’re talking about humans alive today? Or all humans who’ve ever lived? I’d be extremely surprised if more than 50% of the former had hunted and enjoyed it. (And, considering that approximately half of all humans are female, I would be somewhat surprised about the latter as well.)
So, by “enjoy hunting” I mean “after going hunting, would enjoy it” more than “have gone hunting and enjoyed it.” In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they imagine it would be. It is not clear to me whether the percentage of women who would enjoy hunting is smaller than the percentage of men who would not.
Be careful with that kind of argument, for the same is probably true of heroin. (Yes, there are huge differences between hunting and heroin, but still...)
Dammit, I was literally about to remove that claim when you posted this :(