Could you elaborate on your reasoning behind the proposition ‘Killing: generally wrong’? Maybe that would allow me to explain myself, and especially to reformulate my question about whether there is anyone who thinks that killing is wrong regardless of an agent’s preferences.
Persons have a right not to be killed; persons who have waived or forfeited that right, and non-persons, are still entities which should not be destroyed absent adequate reason. Preferences come in with the “waived” bit, and the “adequate reason” bit, but even if nobody had any preferences (...somehow...) then it would still be wrong to kill people who retain their right not to be killed (this being the default, assuming the lack of preferences doesn’t paradoxically motivate anyone to waive their rights), and still be wrong to kill waived-rights or forfeited-rights persons, or non-persons, without adequate reason. I’m prepared to summarize that as “Killing: generally wrong”.
Fascinating. This view is utterly incomprehensible to me. I mean, I understand what you are saying, but I just can’t understand how or why you would believe such a thing.
The idea of “rights” as things that societies enact makes sense to me, but universal rights? I’d be interested to know on what basis you believe this. (A link or other reference is fine, too.)
I derived my theory by inventing something that satisfied as many of my intuitive desiderata about an ethical theory as possible. It isn’t perfect, or at least not yet (I expect to revise it as I think of better ways to satisfy more desiderata), but I haven’t found better.
What’s the justification for taking your intuitive desiderata as the most (sole?) important factor in deciding on an ethical theory?
As opposed to any of many other strategies, such as finding the theory which if followed would result in the greatest amount of (human?) fun, or find the theory that would be accepted by the greatest number of people who are almost universally (> 99%) regarded as virtuous people, or …
Unless “theory which would maximize human fun” or “theory that would be well-received among people popularly understood to be virtuous” are desiderata of mine, why in the world should I use them? It would be circular to use them to craft my ethical theory because my ethical theory says to, incomprehensible to use them because somebody else’s says to, and unmotivated to use them for any other reason.
Yes, obviously. The question was in the first paragraph, not the second, which you seem to have gotten hung up on. The question, again, was: what’s the justification for taking your intuitive desiderata as the most (sole?) important factor in deciding on an ethical theory?
I gave examples of some strategies for choosing an ethical theory that some other people might choose only to show that it’s not obviously clear that your strategy is the sole or best strategy. So the question, again, is why do you think that particular strategy is the best one (assuming you considered others and you do believe that’s best)?
I’m not clear on what you’re suggesting. Are you asking why I used my intuitive desiderata, as opposed to someone else’s desiderata, or desiderata I picked out of a hat, or evitanda just to be contrary, or not seeking an ethical theory at all, or...? What’s the salient alternative here that I’m meant to justify dismissing?
I’m asking why you decided that “choose the theory that best satisfies my intuitive desiderata” was the best method of choosing a theory. What justifies that method of “choosing a theory”, if there is a justification and you did in fact think about it beforehand? If you did think about it, presumably you decided that was the best method of choosing a theory for some reason(s), and I’m asking what those reasons might be.
One alternative, for example, might be for me to critically analyze my intuitions beforehand and be skeptical that all my intuitions are good for me (in the sense that acting on those intuitions best furthers all my interests weighted accordingly), and I might then choose to discard some of my intuitive desiderata or weight them in some way before proceeding with whatever else I’ve decided on as a method of choosing. I might decide to just accept the theory that is most respected by my parents, or my priest, or the ethics professors that I most admire. I might decide to accept a theory on the basis of anticipating the results that believing in the theory will have on me and choosing the theory with the best anticipated effect. I haven’t given the justifications here, because these are just examples, but if I were to follow one of those strategies, I would almost certainly have reasons for thinking that strategy was better than others I considered. Those reasons are what I was asking you about. Just to head off another potential misunderstanding, I’m not suggesting that you should have considered any of these or that any of these are better strategies. They’re just given as evidence of the fact that your strategy is not the only one.
I’m very curious what was so vague or poorly expressed or confusing in my original post if you (or anybody else) can identify something in particular.
Are you looking for a causal history or a theoretical justification...? Meh, I’ll just summarize both together.
Trying to unite my desiderata into a single theory that doesn’t eat itself proved a good means of reconciling or prioritizing my intuitions where they conflicted. (For instance, I had warring intuitions over whether to privilege the null action or commit myself to moral luck, and chose the former because my intuition against moral luck was stronger than my wariness of the doing-allowing distinction.) I find having reconciled/prioritized desiderata more comfortable and actionable, and codifying them into a decision procedure makes them easier to act on consistently.
I found all the theories I’d run across in academic puttering around to be deeply unsatisfactory in one or more ways; no authority figures I respected enough to consider emulating put forth coherent theories of their own. (I admired one of my undergrad professors enough that I might have considered doing this, explicitly or just implicitly by letting him argue it to me in real time before I was equipped to argue back very well, but he didn’t talk about his personal views during ethics class and didn’t specialize in the field, so I never found a paper on it by him or anything.) That meant I had to either not have one (which would lead to awkward silences when people in grad school asked me for my ethical opinions, an uncomfortable lack of opinion when writing ethics papers, and no decision procedure to follow when I was uncertain of some real-life choice’s moral status), or make up my own. To make one up with “the best anticipated effect” would presuppose consequentialism, which I rejected pretty much as soon as I heard it. I wanted the ethical theory that would lead to me giving the right answers according to me, in a principled way, to ethical dilemmas where I already had a right answer in mind (e.g. let’s not murder homeless people for their organs, thankyouverymuch), and let me pick my way through murkier cases in a way that felt right.
Thanks for the explanation. I was looking more for theoretical justification (if theoretical justification played a part for you in deciding how to choose an ethical theory). What I had in mind was, if you were going to try to convince other people that they should choose an ethical theory for the same reasons that you chose yours and should adopt the same theory you did, what would be the arguments that you would use to persuade them (limited to good-faith arguments that you actually believe rather than rhetorical strategies aimed primarily at convincing)? And there’s a little of that in your answer here. Thanks for your time.
What’s the justification for taking your intuitive desiderata as the most (sole?) important factor in deciding on an ethical theory?
Because she wanted to (where ‘wanted to’ indicates after fully reflecting on all relevant factors). Doing anything other than what she wanted to do would basically be signalling bullshit.
As opposed to any of many other strategies, such as finding the theory which if followed would result in the greatest amount of (human?) fun, or find the theory that would be accepted by the greatest number of people who are almost universally (> 99%) regarded as virtuous people, or …
Those are all things that might be included if they are intuitive desiderata of Alicorn’s or she believes they are instrumentally useful in creating a theory that satisfies said desiderata. Either that or she is lying to signal naivety or submission.
I might have disagreed with this a few months ago, so, just in case people with brains similar enough to mine are reading this, I will make this as clear as possible.
She had to do what she wanted to do. As in deterministically had to. There is no physical object other than her brain that makes her decisions. There is no chain of causality that could cause her to make a decision that does not start with the desires in her brain.
EDIT: Eliezer has a better one for this:
“Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
There is no physical object other than her brain that makes her decisions.
Yep! But I would stop short of saying that “people all do what they want to do”. People tend not to reflect enough on their desires; they may act out of habit; they may not act on them even when they know what they are; and people may have hierarchies or communities of conflicting desires so that there isn’t even a clear answer to “what do I want?”

Yes, I agree with this.
The quote seemed wrong to me the first time I read it, which is why I forgot about it and had to add it to my post afterward. This seems like part of the reason why.
“Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”

Have Tourettes.

If I had Tourettes, I would not call the part of my brain with Tourettes “me”.

Kind of the point.

Our actions are not directly determined by our desires.
I would not call an action that I do not decide to bring about “my action”.
What are we disagreeing on apart from wording? One can only do what is right if one desires to do what is right. There are many barriers between that and what actually gets done (which is why FAI is a good idea). A brain with Tourettes and one without Tourettes but with the same desires are effectively the same decision-making process in different environments, up to the approximation that brains are decision-making processes.

If only the courts accepted that as a defense. “If I say it ain’t me you must set free!”
If my body were prone to murdering people and I were unable to stop this, I would consent to being jailed. I would advocate some form of isolation or similar for anyone with this problem.
If we taboo for a sec the words “right”, “wrong”, “should” and “should not”, how would I best approximate the concept of universal rights?
Here’s how: “Nearly everyone has a sense of personal sovereignty, in the sense that there exist elements of the universe that a person considers belonging to said person—so that if another agent acts to usurp or wrest control of such elements, a strong emotion of injustice is provoked. This sense of personal sovereignty will often conflict with the sense of others, especially if the sense of injustice is inflated to include physical or intellectual property: but if we minimize the territories to certain natural boundaries (like persons’ bodies and minds), we can aggregate the individual territories into a large map of the universe, so that it will have huge tons of grey disputed areas but also some bright areas clearly labelled ‘Alex’s body belongs to Alex’s sovereignty’ or ‘Bob’s body falls to Bob’s sovereignty’.”
What you say seems contrived to me. You could have uttered the exact opposite and it wouldn’t change anything about the nature of reality as a whole but solely the substructure that is Alicorn.

Indeed, I have never claimed to have reality-altering superpowers such that I can make utterances that accomplish this. What’s your point?
In my original comment I asked if anyone would (honestly) suggest that ‘killing is wrong’ is a moral imperative, that it is generally wrong. You asserted exactly that in your reply. I thought you misunderstood what I have been talking about. Now I am not so sure anymore. If that is really your opinion then I have no idea how you arrived at that belief.
2^10=1024

The fact that I chose this equation is not built into the universe in the same way that “Faster than light travel: generally wrong” is. In fact, I chose differently in other Everett branches. The equation is still true. The fact that Alicorn came to have these specific moral beliefs is similarly nonfundamental, but killing is still objectively Alicorn_wrong.
I’m pretty sure I don’t understand anything you just said. Sorry.