As a result of this conversation, I began to question whether I intrinsically value the freedom of agents other than myself. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me, and partly because I am (safely) in an impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition, “members of the genus Homo”, which includes species that are not even Homo sapiens. If you wish to refer to a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of beings that qualify as active or passive moral agents (respectively).
Why?
Because the borogoves are mimsy.
There’s a big difference between a term that has no reliable meaning and a term that has two reliable meanings, one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions for being a human in the non-taxonomical sense? The original confusion arose when I was wrongly assumed to be a human in that sense, and I never even thought to wonder whether there was a meaning of ‘human’ that didn’t include at least all typical adult Homo sapiens.
As a result of this conversation, I began to question whether I intrinsically value the freedom of agents other than myself.
Well, you can have more than one terminal value (or term in your utility function, whatever). Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics. After all, we are not perfect reasoners, and imposing our uncertain opinion on other reasoners of similar intelligence who reached different conclusions seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI: not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly).
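To make the more-than-one-term point concrete, here is a minimal sketch of a utility function with several terminal terms; the weights and state variables are invented for illustration, not a claim about anyone’s actual values.

```python
# A toy utility function with several terminal terms. The weights and the
# state variables are invented for illustration.

def utility(state):
    """Combine several independent terminal values into one score."""
    return (
        1.0 * state["total_wellbeing"]  # one terminal term: welfare
        + 0.5 * state["freedom"]        # freedom valued in itself (terminal)
        + 0.2 * state["knowledge"]      # a third terminal term
    )

# Freedom can also carry instrumental weight on top of this: states with
# more freedom tend to produce more wellbeing later, so even an agent whose
# only terminal term were wellbeing would often act to preserve freedom.

print(utility({"total_wellbeing": 10.0, "freedom": 4.0, "knowledge": 2.0}))
# 12.4
```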
I’ll point out that “human” has a technical definition, “members of the genus Homo”, which includes species that are not even Homo sapiens.
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans: no psychopaths or Neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is an intrinsic value in my ethics, in addition to its instrumental value.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll.)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
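A back-of-the-envelope version of that query, with every number invented for illustration:

```python
# Toy expected-value comparison for living organ donation under an impartial
# value function. Every number below is invented for illustration.

QALYS_GAINED_BY_RECIPIENT = 7.0  # assumed expected benefit of a transplant
QALYS_LOST_BY_DONOR = 0.5        # assumed surgical risk, recovery, long-term cost

def impartial_value(qalys_by_person):
    """Value human life without privileging any individual: just sum
    quality-adjusted life-years across everyone affected."""
    return sum(qalys_by_person.values())

donate = {"donor": -QALYS_LOST_BY_DONOR, "recipient": QALYS_GAINED_BY_RECIPIENT}
abstain = {"donor": 0.0, "recipient": 0.0}

# Under these made-up numbers the impartial function favors donating, which
# is the commenter's point: almost nobody with two healthy lungs and kidneys
# actually follows that function.
print(impartial_value(donate) > impartial_value(abstain))  # True
```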
It’s nowhere near the default value system I encounter in meatspace.
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
[/retraction]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
Another possibility is that humanity should be altered so that it makes different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways). Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction. There are others.
Another possibility is that humanity should be altered so that it makes different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and re-education could also be construed as limiting freedom in some capacity (although that’s down to definitions). I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
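One way to make that question precise is a break-even expected-utility calculation. The sketch below uses invented utilities and time horizons purely to show the structure of the tradeoff.

```python
# Toy break-even calculation for the question above. All utilities and
# durations are invented; only the structure of the tradeoff matters.

U_EXTINCTION = -1e12           # assumed (finite!) disutility of extinction
U_NO_FREEDOM_PER_YEAR = -1e6   # assumed disutility of denying all choice, per year
YEARS = 100                    # assumed duration of the restriction

# Restricting freedom wins only if it buys enough extinction-risk reduction:
#   delta_p * |U_EXTINCTION| > |U_NO_FREEDOM_PER_YEAR| * YEARS
break_even_risk_reduction = (abs(U_NO_FREEDOM_PER_YEAR) * YEARS) / abs(U_EXTINCTION)
print(f"{break_even_risk_reduction:.0e}")  # 1e-04 with these numbers

# If extinction is assigned literally infinite negative utility, any nonzero
# risk reduction dominates, so whether the disutility is finite or infinite
# is doing the real work in the argument.
```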
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED.
QED does not apply there. You need a huge ceteris paribus included before that follows simply, and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.
One major possibility would be that the extinction of humanity is not negative infinity utility.