Well, take the alien hypothetical. We make contact with this alien race, and somehow they have almost the same values as us. They too have a sense of fun, and aesthetics, and they care about the interests of others. Are their interests still worth less than a human’s interests? And do we have any right to object if they feel that our interests are worth less than their own?
I can’t take seriously an ethical system that says, “Humans are more morally considerable, simply because I am human”. I need my ethical system to be blind to the facts of who I am. I could never expect a non-human to agree that humans were ethically special, and that failure to convince them becomes a failure to convince myself.
I feel like there’s a more fundamental objection here that I’m missing.
I need my ethical system to be blind to the facts of who I am.
Well sure, knock yourself out. I don’t feel that need though.
Edit: Wow, this is awkward; I’ve even got an old draft titled “Rawls and probability” where I explain why I think he’s wrong. I should really get back to working on that, even if I just publish an obsolete version.
Are their interests still worth less than a human’s interests?
Personally to me? Yes.
But if you make them close enough, some alien’s interest might be worth more than some human’s interest. If you make them similar enough, the differences in my actions practically disappear. You can also obviously make an alien species that I would prefer to humans, perhaps even to all humans. This doesn’t change that they take a small hit for not being me and extended me.
I’m somewhat selfish. But I’m not just somewhat selfish for me, I’m somewhat selfish for my mother, my sister, my girlfriend, my cousin, my best friend, etc.
I could never expect a non-human to agree that humans were ethically special, and that failure to convince them becomes a failure to convince myself.
You could never expect a non-you to agree that you are ethically special. Does that failure to convince them become a failure to convince yourself?
Sure you can make a non-me that I prefer to me. I’m somewhat selfish, but I think I put weight on future universe states in themselves.
Huh. I can make a non-you that you would prefer to you? That is not in fact obvious. Can you say more about what properties that non-you would have?
Sure, I may try to stop you from making one though. Depends on the non-me I guess.
Konkvistador wakes up in a dimly lit, perfectly square white room, sitting on a chair, staring at Omega.
Omega: “Either you or your daughter can die. Here I have this neat existence machine. When I turn it on in a few minutes, I can set the lever to you or to Anna; the one that’s selected is reimplemented in atoms back on Earth, the other isn’t, and their pattern is deleted.”
Me: “I can’t let Anna die. Take me!”
Omega: “Ok, but before we do that: you can keep on existing, or I can make a daughter exist in your place.”
Me: “What, Anna?”
Omega: “No, no, a new daughter.”
Me: “Eh, no thanks.”
Omega: “Are you sure?”
Me: “Pretty much.”
Omega: “Ok what if I gave you all the memories of several years spent with her?”
Me: “That would make her the same to me as Anna.”
Omega: “It would. Indeed I may have done that already. Anna may or may not exist currently. So about the memories of an extra daughter …”
Me: “No thanks.”
Omega: “Ok ok, would you like your memories of Anna taken away too?”
Me: “No.”
Omega: “Anna is totally made up, I swear!”
Me: “It seemed probable; the answer is no regardless. And yes to saving Anna’s life.”
Konkvistador dies and somewhere on Earth in a warm bed Anna awakes and takes her first breath, except no one knows it is her first time.
Just to make sure I understood: you can value the existence of a nonexistent person of whom you have memories that you know are delusional more than you value your own continued existence, as long as those memories contain certain properties. Yes?
So, same question: can you say more about what those properties are? (I gather from your example that being your daughter can be one of them, for example.)
Also… is it important that they be memories? That is, if instead of delusional memories of your time with Anna, you had been given daydreams about Anna’s imagined future life, or been given a book about a fictional daughter named Anna, might you make the same choice?
Just to make sure I understood: you can value the existence of a nonexistent person of whom you have memories that you know are delusional more than you value your own continued existence, as long as those memories contain certain properties. Yes?
Yes.
So, same question: can you say more about what those properties are? (I gather from your example that being your daughter can be one of them, for example.)
I discovered the daughter example purely empirically when doing thought experiments. It seems plausible there are other examples.
Also… is it important that they be memories? That is, if instead of delusional memories of your time with Anna, you had been given daydreams about Anna’s imagined future life, or been given a book about a fictional daughter named Anna, might you make the same choice?
Both of these would have significantly increased the probability that I would choose Anna over myself, but I think the more likely course of action is that I would choose myself.
If I have memories of Anna and my life with her, I basically find myself in the “wrong universe”, so to speak: the universe where Anna and my life with her for the past few years didn’t happen. I have the possibility to save either Anna or myself by putting one of us back in the right universe (turning this one into the one in my memory).
In any case I’m pretty sure that Omega can write sufficiently good books to make you value Anna or Alice or Bob above your own life. He could probably even make a good enough picture of an “Anna” or “Alice” or “Bob” for you to want her/him to live even at the expense of your own life.
Suppose one day you are playing around with some math and you discover a description of … I hope you can see where I’m going with this. Not knowing the relevant data set about the theoretical object, Anna, Bob or Cthulhu, you may not want to learn of them if you think it will make you want to prefer their existence to your own. But once you know them, by definition you value their existence above your own.
This brings up some interesting associations not just with Basilisks but also with CEV in my mind.
You could never expect a non-you to agree that you are ethically special. Does that failure to convince them become a failure to convince yourself?
Yes, definitely. I don’t think I’m ethically special, and I don’t expect anyone else to believe that either. If you’re wondering why I still act in self-interest, see reply on other thread.
Reading this, I wonder why you think of pursuing systematized enlightened self-interest, or whatever Konkvistador would call the philosophy, as not ethics rather than as a different ethics?
What is your opinion on things like racism and sexism?
They are bad when they on net hurt people? What kind of an answer were you expecting?
Both racism and sexism are an unfair and inequal preference for a specific group, typically the one the racist or sexist is a member of. If you’re fine with preferring your personal species above and beyond what an equal consideration of interests would call for, I was interested in if you also were fine with others making other calls about what interests they would give inequal weight to.
It seems inconsistent to be a speciesist, but not permit others to be racist or sexist.
Both racism and sexism are an unfair and inequal preference for a specific group, typically the one the racist or sexist is a member of.
A racist or sexist may or may not dispute the unequal bit, but I’m pretty sure they would dispute the unfair bit. Because duh, ze doesn’t consider it unfair. I’m not too sure what you mean by fair; can we taboo it or at least define it?
Also, would you say there exist fair and equal preferences for a specific group that one belongs to? Note, I’m not asking you about racism or sexism specifically, but about any group (which I assume are all based on a characteristic the people in them share, be it a mole under their right eye, a love for a particular kind of music or a piece of paper saying “citizen”).
I was interested in if you also were fine with others making other calls about what interests they would give inequal weight to.
Sure, why not? They should think about this stuff and come up with their own answers. It’s not like there is an “objectively right morality” function floating around, and I do think human values differ. I couldn’t honestly say I wouldn’t be too biased when trying to make a “fixed” version of someone’s morality; I think I would probably just end up with a custom-tailored batch of rationalizations that would increment their morality towards my own, no matter what my intentions.
Though obviously if they come up with a different value system than my own, our goals may no longer be complementary and we may indeed become enemies. But we may be enemies even if we have identical values; for example, valuing survival in itself can easily pit you against other entities valuing the same, and the same goes for pure selfishness. Indeed sometimes fruitful cooperation is possible precisely because we have different values.
It isn’t like Omega ever told us that all humanity really does have coherent complementary goals or that we are supposed to. Even if that’s what we are “supposed” to do, why bother?
You forgot classist and ableist. In any case it seems to be equally inconsistent to be opposed to speciesism on those grounds and, say, not be opposed to being selfist, familyist and friendist. Or are you?
I don’t think so. Just because we’re more willing to help out our friends than random strangers doesn’t imply we should be fine with people going around shooting random strangers in their legs. Likewise, we could favor our species compared to nonhuman animals and still not be fine with some of their harsh farming conditions.
How much value do you place on nonhuman animal welfare?
I do think so. The last few exchanges we had were about “I was interested in if you also were fine with others making other calls about what interests they would give inequal weight to.”
I demonstrated that I’m fine with “inequal” and supposedly “unfair” (can we define that word?) preferences. While it may seem simple to separate the two, unwillingness to help out and harming people are in many circumstances (due to opportunity costs, for starters) the same thing.
Just because we’re more willing to help out our friends than random strangers doesn’t imply we should be fine with people going around shooting random strangers in their legs.
What if I shoot a stranger who is attacking my friends, my family or myself in the legs? Or choose to run over strangers rather than my daughter in a trolley problem?
I’m more fine with the suffering of random strangers than I am with the suffering of my friends or family. I don’t think that I’m exceptional in this regard. Does this mean that their suffering has no value to me? No, obviously not; I would never torture someone to get my friend a car or make my elderly mother a sandwich.
Put aside my earlier notions of “inequal” and “unfair”… I don’t think they’re necessary for us to proceed on this issue.
You said these things were “bad when they on net hurt people”. I noticed you said people, and not non-human animals, but you have said that you put at least some value on non-human animals.
Likewise, you’ve agreed that the pro-friend, pro-family preference only carries so far. But how far does the pro-human preference go? Assuming we agree on (1) the quality of life of certain nonhuman animals as they are made for food, (2) the capabilities for these nonhuman animals to feel a range of pain, and (3) the change in your personal quality of life by adopting habits to avoid most to all of this food (three big assumptions), then it seems like you’re fine with a significant measure of speciesism.
I guess if your reaction is “so what”, we might just have rather different terminal values, though I’m kind of surprised that would be the case.
You said these things were “bad when they on net hurt people”. I noticed you said people, and not non-human animals, but you have said that you put at least some value on non-human animals.
That was in the context of thinking about sexism and racism. I assumed they have little impact on non-humans.
But how far does the pro-human preference go? Assuming we agree on (1) the quality of life of certain nonhuman animals as they are made for food, (2) the capabilities for these nonhuman animals to feel a range of pain, and (3) the change in your personal quality of life by adopting habits to avoid most to all of this food (three big assumptions), then it seems like you’re fine with a significant measure of speciesism.
I guess if your reaction is “so what”, we might just have rather different terminal values, though I’m kind of surprised that would be the case.
I could be underestimating how much animals suffer (I almost certainly am to a certain extent, simply because it is not something I have researched, and less suffering is the comforting default answer); you could be overestimating how much you care about animals being in pain due to anthropomorphizing them somewhat.
Definitely a possibility, though I try to eliminate it.