I think diverting the train is a much more complicated situation that hinges on factors normally omitted from the description and considered irrelevant by most. It could go any of three ways, depending on factors unrelated to the number of deaths. (In many cases the murderous action has already been taken, the decision is whether the murderer murders one person or ten, and the action or inaction is taken with only the decider, the train, and the murderer as participants.)
Let’s stipulate two scenarios: one in which the quandary is the result of a supervillain, and one in which it is sheer bad luck.
Do I own the track, or am I designated by the person with ownership as having the authority to determine arbitrarily in what manner the junction may be operated? Do I have any prior agreement with regards to the operation of the junction, or any prior responsibility to protect lives at all costs?
Absent prior agreements, if I have that authority to operate the track, it is neutral whether I choose to use it or not. If I were to own and control a hospital, I could arbitrarily refuse to support consensual fatal organ donations on my premises.
If I have a prior agreement to save as many lives as possible at all costs, I must switch to follow that obligation, even if it means violating property rights. (Such an obligation also means that I have to assist with the forcible harvesting of organs).
If I don’t have the right to operate the junction according to my own arbitrary choice, I would be committing a small injustice against the owner of the junction by operating it, and the direct consequences of that action would also be mine to bear; if the one person who would be killed by my action does not agree to be, I would be murdering him in the moral sense, as opposed to allowing others to be killed.
I suspect that my actual response to these contrived situations would be inconsistent; I would allow disease to kill ten people, but faced with a single event that would kill ten people unless I took a trivial action, I would act to kill one instead (assuming no other choice existed). I prefer to believe that is a fault in my implementation of morality.
Nope. Oh, and the tracks join up after the people; you won’t be sending a train careening off on the wrong track to crash into who knows what.
I think you may be mistaking legality for morality.
I’m not asking what you would have to do; I’m asking what you should do. Since prior agreements can mess with that, let’s say the tracks are public property and anyone can change them, and you will not be punished for letting the people die.
Murder has many definitions. Even if it would be “murder”, which is the moral choice: to kill one or to let ten die?
Could be. We would have to figure out why those seem different. But which of those choices is wrong? Are you saying that your analysis of the surgery leads you to change your mind about the train?
The tracks are public property; walking on the tracks is then a known hazard. Switching the tracks is ethically neutral.
The authority I was referencing was moral, not legal.
I was actually saying that my actions in some contrived circumstances would differ from what I believe is moral. I am actually comfortable with that. I’m not sure whether I would be comfortable with an AI that always followed a strict morality, or with one that sometimes deviated.
Blaming the individuals for walking on the tracks is simply assuming the not-least-convenient world, though. What if they were all tied up and placed upon the tracks by some evil individual (who is neither one of the people on the tracks nor the one you can push onto the tracks)?
In retrospect, the known hazard is irrelevant.
You still haven’t answered what the correct choice is if a villain put them there.
As for the rest … bloody hell, mate. Have you got some complicated defense of those positions or are they intuitions? I’m guessing they’re not intuitions.
I don’t think the prior events would be relevant to the choice made in isolation.
Moral authority is only a little bit complicated in my view, but it incorporates autonomy and property and overlaps with the very complicated and incomplete social contract theory, and I think it requires more work before it can be codified into something that can be followed.
Frankly, I’ve tried to make sure that the conclusions follow reasonably from the premise (all people are metaphysically equal), but it falls outside my ability to implement logic, and I suspect that it falls outside the purview of mathematics in any case. There are enough large jumps that I suspect I have more premises than I can explicate.
Wait, would you say that while you are not obligated to save them, saving them would be better than letting them die?
I decline to make value judgements beyond obligatory/permissible/forbidden, unless you can provide the necessary and sufficient conditions for one result to be better than another.
I ask because I checked and the standard response is that it would not be obligatory to save them, but it would be good.
I don’t have a general model for why actions are suberogatory or supererogatory.
I think a good way to think of this result is that leaving the switch on “kill ten people” nets 0 points, moving it from “ten” to “one” nets, say, 9 points, and moving it from “one” to “ten” loses you 9 points.
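If it helps, here is a minimal sketch of that scoring scheme; the function and the point values are purely illustrative, not a worked-out moral theory:

```python
# Minimal sketch of the point scheme above. Inaction is scored as neutral;
# only acts are credited or debited, by the change in deaths they cause.

def switch_points(initial: str, final: str) -> int:
    deaths = {"ten": 10, "one": 1}
    if initial == final:
        return 0  # leaving the switch where it is nets 0 points
    return deaths[initial] - deaths[final]  # ten -> one: +9; one -> ten: -9

print(switch_points("ten", "ten"))  # 0
print(switch_points("ten", "one"))  # 9
print(switch_points("one", "ten"))  # -9
```

The act/omission asymmetry is the whole content of the scheme: leaving the switch alone is never penalized, only deliberate diversions are scored.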
I have no model that accounts for the surgery problem without crude patches like “violates bodily integrity = always bad.” Humans in general seem to have difficulties with “sacred values”; how many dollars is it worth to save one life? How many hours (years?) of torture?
I think that “violates bodily autonomy”=bad is a better core rule than “increases QALYs”=good.
I think I’m mostly a rule utilitarian, so I certainly understand the worth of rules...
… but that kind of rule leaves it ambiguous how to define any possible exceptions. Let’s say that you see a baby about to start chewing on broken glass—the vast majority would say that it’s obligatory to stop it from doing so, and of the remainder most would say that it’s at least permissible. But if we set up “violates bodily autonomy”=bad as an absolute rule, we are actually morally forbidden to physically prevent the baby from doing so.
So what are the exceptions? If it’s an issue of competence (the adult has a far better understanding of what chewing glass would do, and therefore has the right to ignore the baby’s rights to bodily autonomy), then a super-intelligent AI would have the same relationship in comparison to us...
Does the theoretical baby have the faculties to meaningfully enter an agreement, or to meaningfully consent to be stopped from doing harmful things? If not, then the baby is not an active moral agent, and is not considered sentient under the strict interpretation. Once the baby becomes an active moral agent, they have the right to choose for themselves whether they wish to chew broken glass.
Under the loose interpretation, the childcare contract obligates the caretaker to protect, educate and provide for the child and grants the caretaker permission from the child to do anything required to fulfill that role.
What general rules do you follow that require or permit stopping a baby from chewing on broken glass, but prohibit forcibly stopping adults from engaging in unhealthy habits?
The former is an ethical injunction, the latter is a utility approximation. They are not directly comparable.
We do loads of things that violate children’s bodily autonomy.
And in doing so, we assert that children are not active moral agents. See also paternalism.
Yeah but… that’s false. Which doesn’t make the rule bad; heuristics are allowed to apply only in certain domains, but a “core rule” shouldn’t fail for over 15% of the population. “Sentient things that are able to argue about harm, justice and fairness are moral agents” isn’t a weaker rule than “Violating bodily autonomy is bad”.
Do you believe that the ability to understand the likely consequences of actions is a requirement for an entity to be an active moral agent?
Well, it’s less well-defined, if nothing else. It’s also less general; QALYs enfold a lot of other values, so by maximizing them you get stuff like giving people happy, fulfilled lives and not shooting ’em in the head. It just doesn’t enfold all our values, so you get occasional glitches, like killing people and selling their organs in certain contrived situations.
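To make that glitch concrete, here is a toy QALY calculation; every figure is a hypothetical placeholder, not a medical estimate:

```python
# Toy QALY bookkeeping for the transplant "glitch". Each person is a pair of
# (expected remaining years, quality weight in [0, 1]); all numbers are
# invented placeholders chosen only to show the shape of the problem.

def total_qalys(people):
    return sum(years * quality for years, quality in people)

# One healthy visitor plus ten patients who will soon die without organs.
do_nothing = [(40, 0.9)] + [(0.5, 0.5)] * 10  # visitor lives, patients die
harvest = [(0, 0.0)] + [(25, 0.8)] * 10       # visitor killed, patients saved

print(total_qalys(do_nothing))  # 38.5
print(total_qalys(harvest))     # 200.0 -- the metric prefers harvesting
```

Because the metric just sums over everyone equally, the contrived harvest scenario comes out ahead; that is the glitch being referred to.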
Values also differ among even perfectly rational individuals. There are some who would say that killing people for their organs is the only moral choice in certain contrived situations, and reasonable people can mutually disagree on the subject.
And your point is...?
I’m trying to develop a system which follows logically from easily-defended principles, instead of one that is simply a restatement of personal values.
Seems legit. Could you give me an example of “easily-defended principles”, as opposed to “restatements of personal values”?
“No sentient individual or group of sentient beings is metaphysically privileged over any group or individual.”
That seems true, but the “should” in there would seem to label it a “personal value”. At least, if I’ve understood you correctly.
I’m completely sure that I didn’t understand what you meant by that.
Damn. Ok, try this: where did you get that statement from, if not an extrapolation of your personal values?
In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
It seems to me that an agent could coherently value people with purple eyes more than people with orange eyes. And its arguments would not move you, nor yours it.
And if you were magically convinced that the other was right, it would be near-impossible for you to defend their position; the agent might claim that we can never be certain whether eyes are truly orange or merely a yellowish red, while you might claim that purple-eyed folk are rare and should be preserved for diversity’s sake.
Am I wrong, or is this not the argument you’re making? I suspect at least one of us is confused.
I didn’t claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
Oh, that makes sense. You’re trying to extrapolate your own ethics. Yeah, that’s how morality is usually discussed here, I was just confused by the terminology.
… with the goal of reaching a point that is likely to be agreed on by as many people as possible, and then discussing the implications of that point.
Shouldn’t your goal be to extrapolate your ethics, then help everyone who shares those ethics (i.e. humans) extrapolate theirs?
Why ‘should’ my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
Extrapolating other people’s ethics may or may not help you satisfy your own extrapolated goals, so I think that may be the only metric by which you can judge whether or not you ‘should’ do it. No?
Then there might be superrational considerations, whereby if you helped people sufficiently like you to extrapolate their goals, they would (sensu Gary Drescher, Good and Real) help you to extrapolate yours.
Well, people are going to extrapolate their ethics regardless. You should try to help them avoid mistakes, such as “blowing up buildings is a good thing” or “lynching black people is OK”.
Well sure. Psychopaths, if nothing else.
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
Perhaps I haven’t made myself clear. Their mistakes will, by definition, violate your (shared) ethics. For example, if they are mistakenly modelling black people as subhuman apes, and you both value human life, then their lynching blacks may never affect you—but it would be a nonpreferred outcome, under your utility function.
My utility function is separate from my ethics. There’s no reason why everything I want happens to be something which is moral.
It is a coincidence, not a tautology, that murder is both unethical and disadvantageous to me.
You may have some non-ethical values, as many do, but if your ethics are no part of your values, you are never going to act on them.
I am considering taking the position that I follow my ethics irrationally; that I prefer decisions which are ethical even if the outcome is worse. I know that position will not be taken well here, but it seems more accurate than the position that I value my ethics as terminal values.
No, I’m not saying it would inconvenience you, I’m saying it would be a Bad Thing, which you, as a human (I assume), would get negative utility from. This is true for all agents whose utility function is over the universe, not e.g. their own experiences. Thus, say, a paperclipper should warn other paperclippers against inadvertently producing staples.
Projecting your values onto my utility function will not lead to good conclusions.
I don’t believe that there is a universal, or even local, moral imperative to prevent death. I don’t value a universe in which more QALYs have elapsed over a universe in which fewer QALYs have elapsed; I also believe that entropy in every isolated system will monotonically increase.
Ethics is a set of local rules which is mostly irrelevant to preference functions; I leave it to each individual to determine how much they value ethical decisions.
That wasn’t a conclusion; that was an example, albeit one I believe to be true. If there is anything you value, even if you are not experiencing it directly, then it is instrumentally good for you to help others with the same ethics to understand they value it too.
… oh. It’s pretty much a given around here that human values extrapolate to value life, so if we build an FAI and switch it on then we’ll all live forever, and in the meantime we should sign up for cryonics. So I assumed that, as a poster here, you already held this position unless you specifically stated otherwise.
I would be interested in discussing your views (known as “deathism” hereabouts) some other time, although this is probably not the time (or place, for that matter.) I assume you think everyone here would agree with you, if they extrapolated their preferences correctly—have you considered a top-level post on the topic? (Or even a sequence, if the inferential distance is too great.)
Once again, I’m only talking about what is ethically desirable here. Furthermore, I am only talking about agents which share your values; it is obviously not desirable to help a babyeater understand that it really, terminally cares about eating babies if I value said babies’ lives. (Could you tell me something you do value? Suffering or happiness or something? Human life is really useful for examples of this; if you don’t value it, just assume I’m talking about some agent that does, one of Asimov’s robots or something.)
[EDIT: typos.]
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I’ll point out that “human” has a technical definition of “members of the genus Homo” and includes species which are not even Homo sapiens. If you wish to reference a different subset of entities, use a different term. I like ‘sentients’ or ‘people’ for a nonspecific group of people that qualify as active or passive moral agents (respectively).
Why?
Because the borogoves are mimsy.
There’s a big difference between a term that has no reliable meaning, and a term that has two reliable meanings one of which is a technical definition. I understand why I should avoid using the former (which seems to be the point of your boojum), but your original comment related to the latter.
What are the necessary and sufficient conditions to be a human in the non-taxonomical sense? The original confusion was where I was wrongfully assumed to be a human in that sense, and I never even thought to wonder if there was a meaning of ‘human’ that didn’t include at least all typical adult Homo sapiens.
Well, you can have more than one terminal value (or term in your utility function, whatever). Furthermore, it seems to me that “freedom” is desirable, to a certain degree, as an instrumental value of our ethics—after all, we are not perfect reasoners, and to impose our uncertain opinion on other reasoners, of similar intelligence, who reached different conclusions, seems rather risky (for the same reason we wouldn’t want to simply write our own values directly into an AI—not that we don’t want the AI to share our values, but that we are not skilled enough to transcribe them perfectly).
“Human” has many definitions. In this case, I was referring to, shall we say, typical humans—no psychopaths or neanderthals included. I trust that was clear?
If not, “human values” has a pretty standard meaning round here anyway.
Freedom does have instrumental value; however, lack of coercion is also an intrinsic value in my ethics, in addition to the instrumental one.
I don’t think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the “immortalist” value system was assumed to be the LW default; I thought that “human value system” referred to a different default value system.
The “immortalist” value system is an approximation of the “human value system”, and is generally considered a good one round here.
It’s nowhere near the default value system I encounter in meatspace. It’s also not the one that’s being followed by anyone with two fully functional lungs and kidneys. (Aside: that might be a good question to add to the next annual poll.)
I don’t think mass murder in the present day is ethically required, even if doing so would be a net benefit. Even if free choice hastens the extinction of humanity, there is no person or group with the authority to restrict free choice.
I don’t believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
Go ahead: consider a value function over the universe that values human life and doesn’t privilege any one individual, and ask that function whether the marginal inconvenience and expense of donating a lung and a kidney are greater than the expected benefit.
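A rough sketch of that comparison, with every number an invented placeholder rather than a real estimate of donation costs or benefits:

```python
# Sketch of the proposed test: an impartial function over QALYs that values
# human life and privileges no individual, asked whether a healthy person
# should donate a kidney and part of a lung to strangers. The figures are
# made up for illustration only.

def net_qalys_of_donating(donor_cost, recipient_gains):
    # No one is privileged: the donor's loss and the recipients' gains
    # are summed on equal terms.
    return sum(recipient_gains) - donor_cost

print(net_qalys_of_donating(donor_cost=1.0, recipient_gains=[6.0, 4.0]))  # 9.0
```

On placeholder figures like these the function comes out positive, which is the sense in which the comment above claims that anyone following such a function would already have donated.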
Well, no. This isn’t meatspace. There are different selection effects here.
[The second half of this comment is phrased far, far too strongly, even as a joke. Consider this an unofficial “retraction”, although I still want to keep the first half in place.]
If free choice is hastening the extinction of humanity, then there should be someone with such authority. QED. [/retraction]
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity’s free choices no longer have the consequence of hastening extinction.
There are others.
One major possibility would be that the extinction of humanity is not negative infinity utility.
Well, I’m not sure how one would go about restricting freedom without “altering the environment”, and reeducation could also be construed as limiting freedom in some capacity (although that’s down to definitions.) I never described what tactics should be used by such a hypothetical authority.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
QED does not apply there. You need a huge ceteris paribus included before that follows simply, and the ancestor comments have already brought up ways in which all else may not be equal.
OK, QED is probably an exaggeration. Nevertheless, it seems trivially true that if “free choice” is causing something with as much negative utility as the extinction of humanity, then it should be restricted in some capacity.