In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.
And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; for at most the agent might claim that we can never be certain whether eyes are truly orange or merely a yellowish red, and you might claim that purple-eyed folk are rare and should be preserved for diversity’s sake.
Am I wrong, or is this not the argument you’re making? I suspect at least one of us is confused.
I didn’t claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
Oh, that makes sense. You’re trying to extrapolate your own ethics. Yeah, that’s how morality is usually discussed here; I was just confused by the terminology.
Why ‘should’ my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Extrapolating other people’s ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you ‘should’ do it. No?
Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.
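A minimal sketch of that kind of superrational reasoning, with made-up payoffs and a deliberately crude model of ‘sufficiently like you’: two agents run the same decision procedure, so each compares only the two symmetric outcomes.

    # Toy model: two agents with the same decision procedure decide whether to
    # help each other extrapolate their goals. Payoffs are made-up numbers:
    # helping costs you 1, and being helped is worth 3 to you.

    def my_payoff(i_help: bool, they_help: bool) -> int:
        return (3 if they_help else 0) - (1 if i_help else 0)

    # A superrational agent treats a sufficiently similar agent as reaching the
    # same decision it does, so only the symmetric outcomes are on the table.
    both_help = my_payoff(True, True)        # 2
    neither_helps = my_payoff(False, False)  # 0

    print("help" if both_help > neither_helps else "do not help")  # prints "help"

Nothing here depends on the particular numbers, only on mutual help being better than mutual neglect.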
What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as “blowing up buildings is a good thing” or “lynching black people is OK”.
(which need not include all members of the genus Homo)
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
Perhaps I haven’t made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you—but it would be a nonpreferred outcome, under your utility function.
I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.
No, I’m not saying it would inconvenience you, I’m saying it would be a Bad Thing, which you, as a human (I assume), would get negative utility from. This is true for all agents whose utility function is over the universe, not, e.g., their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.
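A minimal sketch of that distinction, with hypothetical names and a made-up weight: an agent whose utility is a function of the whole world-state assigns negative utility to a bad event it never witnesses, while an agent whose utility is a function only of its own experience stream does not.

    # Toy world-state: events occur whether or not the agent observes them.
    world = {
        "bad_events": 1,          # e.g. a lynching that happens far away
        "agent_experiences": [],  # the agent's own experience stream: it saw nothing
    }

    def utility_over_universe(state):
        """Utility as a function of the whole world-state."""
        return -100 * state["bad_events"]

    def utility_over_experiences(state):
        """Utility as a function of the agent's own experiences only."""
        return -100 * state["agent_experiences"].count("bad_event")

    print(utility_over_universe(world))     # -100: a Bad Thing even though unwitnessed
    print(utility_over_experiences(world))  # 0: nothing bad was ever experienced

This is the sense in which a paperclipper whose utility is over the universe cares about staples it will never see.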
Projecting your values onto my utility function will not lead to good conclusions.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
Projecting your values onto my utility function will not lead to good conclusions.
That wasn’t a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
… oh. It’s pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we’ll all live forever, and in the meantime we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.
I would be interested in discussing your views (known as “deathism” hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly—have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
Once again, I’m only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies’ lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don’t value it just assume I’m talking about some agent that does, one of Asimov’s robots or something.)
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition of “members of the genus Homo” and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
There’s a big difference between a term that has no reliable meaning, and a term that has two reliable meanings one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongly assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of ‘human’ that didn’t include at least all typical adult Homo sapiens.
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
Well, you can have more than one terminal value (or term in your utility function, whatever.) Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics—after all, we are not perfect reasoners, and to impose our uncertain opinion on other reasoners, of similar intelligence, who reached different conclusions, seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI—not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly.)
I’ll point out that “human” has a technical definition of “members of the genus homo” and includes species which are not even homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans—no psychopaths or Neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is an intrinsic value in my ethics, in addition to the instrumental value.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
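A minimal sketch of that exercise, using a crude ‘total expected QALYs’ stand-in for such a value function and purely illustrative numbers, not real transplant statistics:

    # Toy impartial value function: total expected QALYs, no individual privileged.
    def impartial_value(qaly_changes):
        """Sum expected QALY changes over everyone affected, weighting no one specially."""
        return sum(qaly_changes.values())

    # Illustrative placeholder numbers only.
    donate = {
        "recipient": 8.0,   # expected QALYs gained from the transplant
        "donor": -0.5,      # expected QALY cost of surgery, recovery, and long-term risk
    }
    keep_both = {"recipient": 0.0, "donor": 0.0}

    # Under these placeholders the impartial function favours donation, because
    # the recipient's expected gain exceeds the donor's expected loss.
    print(impartial_value(donate) > impartial_value(keep_both))  # True

Which numbers are realistic is exactly the empirical question; the sketch only shows the shape of the comparison being asked for.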
It’s nowhere near the default value system I encounter in meatspace.
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
[/retraction]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways). Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction. There are others.
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and reeducation could also be construed as limiting freedom in some capacity (although that’s down to definitions.) I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
QED does not apply there. You need a huge ceteris paribus included before that follows simply, and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.
That seems true, but the “should” in there would seem to label it a “personal value”. At least, if I’ve understood you correctly.
I’m completely sure that I didn’t understand what you meant by that.
Damn. Ok, try this: where did you get that statement from, if not an extrapolation of your personal values?
In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.
And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; for at most the agent might claim that we can never be certain whether eyes are truly orange or merely a yellowish red, and you might claim that purple-eyed folk are rare and should be preserved for diversity’s sake.
Am I wrong, or is this not the argument you’re making? I suspect at least one of us is confused.
I didn’t claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
Oh, that makes sense. You’re trying to extrapolate your own ethics. Yeah, that’s how morality is usually discussed here; I was just confused by the terminology.
… with the goal of reaching a point that is likely to be agreed on by as many people as possible, and then discussing the implications of that point.
Shouldn’t your goal be to extrapolate your ethics, then help everyone who shares those ethics (i.e. humans) extrapolate theirs?
Why ‘should’ my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Extrapolating other people’s ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you ‘should’ do it. No?
Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.
Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as “blowing up buildings is a good thing” or “lynching black people is OK”.
Well sure. Psychopaths, if nothing else.
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
Perhaps I haven’t made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you—but it would be a nonpreferred outcome, under your utility function.
My utility function is separate from my ethics. There’s no reason why everything I want happens to be something which is moral.
It is a coincidence that murder is both unethical and disadvantageous to me, not tautological.
You may have some non-ethical values, as many do, but if your ethics are no part of your values, you are never going to act on them.
I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.
No, I’m not saying it would inconvenience you, I’m saying it would be a Bad Thing, which you, as a human (I assume), would get negative utility from. This is true for all agents whose utility function is over the universe, not, e.g., their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.
Projecting your values onto my utility function will not lead to good conclusions.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
That wasn’t a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.
… oh. It’s pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we’ll all live forever, and in the meantime we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.
I would be interested in discussing your views (known as “deathism” hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly—have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)
Once again, I’m only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies’ lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don’t value it just assume I’m talking about some agent that does, one of Asimov’s robots or something.)
[EDIT: typos.]
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition of “members of the genus Homo” and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
Why?
Because the borogoves are mimsy.
There’s a big difference between a term that has no reliable meaning, and a term that has two reliable meanings one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongly assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of ‘human’ that didn’t include at least all typical adult Homo sapiens.
Well, you can have more than one terminal value (or term in your utility function, whatever.) Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics—after all, we are not perfect reasoners, and to impose our uncertain opinion on other reasoners, of similar intelligence, who reached different conclusions, seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI—not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly.)
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans—no psychopaths or Neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is an intrinsic value in my ethics, in addition to the instrumental value.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED. [/retraction]
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
There are others.
One major possibility would be that the extinction of humanity is not negative infinity utility.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and reeducation could also be construed as limiting freedom in some capacity (although that’s down to definitions.) I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
QED does not apply there. You need a huge ceteris paribus included before that follows simply, and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.