Have you given a description of your own ethical philosophy anywhere? If not, could you summarize your intuitions/trajectory? Doesn’t need to be a complete theory or anything, I’m just informally polling the non-utilitarians here.
(Any other non-utilitarians who see this feel free to respond as well)
I feel like I’ve summarized it somewhere, but can’t find it, so here it is again (it is not finished, I know there are issues left to deal with):
Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I’m pretty sure we’ve got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don’t yet have a full account of “contextual relevance”, but basically what that’s there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.
However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I’m calling it “the principle of needless destruction”, but I’m probably going to re-name it later because “destruction” isn’t quite what I’m trying to capture. Basically, it means you shouldn’t go around “destroying” stuff without an adequate reason. Protecting a non-waived, non-forfeited right is always an adequate reason, but apart from that I don’t have a full explanation; how good the reason has to be depends on how severe the act it justifies is. (“I was bored” might be an adequate reason to pluck and shred a blade of grass, but not to set a tree on fire, for instance.) This principle has the effect, among others, of ruling out revenge/retribution/punishment for their own sakes, although deterrence and preventing recurrence of wrong acts are still valid reasons to punish or exact revenge/retribution.
In cases where rights conflict, and there’s no alternative that doesn’t violate at least one, I privilege the null action. (I considered denying ought-implies-can instead, but decided that committed me to the existence of moral luck, which wasn’t okay.) “The null action” is the one where you don’t do anything. This is because I uphold the doing-allowing distinction very firmly. Letting something happen might be bad, but it is never as bad as doing the same something, and is virtually never as bad as performing even a much more minor (but still bad) act.
I hold agents responsible for their culpable ignorance and anything they should have known not to do, as though they knew they shouldn’t have done it. Non-culpable ignorance and its results are exculpatory. Culpability of ignorance is determined by the exercise of epistemic virtues such as attentiveness to evidence. (Epistemologically, I’m an externalist; this is just for ethical purposes.) Ignorance of any kind that prevents something bad from happening is not exculpatory—this is the case of the would-be murderer who doesn’t know his gun is unloaded. No out for him. I’ve been saying “acts”, but in point of fact, I hold agents responsible for intentions, not completed acts per se. This lets my morality work even if solipsism is true, or we are brains in vats, or an agent fails to do bad things through sheer incompetence, or what have you.
Upvoted for spelling out so much, though I disagree with the whole approach (though I think I disagree with the approach of everyone else here too). This reads like pretty run of the mill deontology—but since I don’t know the field that well, is there anywhere you differ from most other deontologists?
Also, are rights axiomatic or is there a justification embedded in your concept of personhood (or somewhere else)?
The quintessential deontologist is Kant. I haven’t paid too much attention to his primary sources because he’s miserable to read, but what Kant scholars say about him doesn’t sound like what goes through my head. One place I can think of where we’d diverge is that Kant doesn’t forbid cruelty to animals except inasmuch as it can deaden humane intuitions; my principle of needless destruction forbids it on its own demerits. The other publicly deontic philosopher I know of is Ross, but I know him only via a two-minute unsympathetic summary which—intentionally or no—made his theory sound very slapdash, like he has sympathies to the “it’s sleek and pretty” defense of utilitarianism but couldn’t bear to actually throw in his lot with it.
The justification is indeed embedded in my concept of personhood. Welcome to personhood, here’s your rights and responsibilities! They’re part of the package.
Ross is an interesting case. Basically, he defines what I would call moral intuitions as “prima facie duties.” (I am not sure what ontological standing he thinks these duties have.) He then lists six important ones: beneficence, honour, non-maleficence, justice, self-improvement and… goodness, I forget the 6th. But essentially, all of these duties are important, and one determines the rightness of an act by reflection—the most stringent duty wins and becomes the actual moral duty.
E.g., you promised a friend that you would meet them, but on the way you come upon the scene of a car crash. A person is injured, and you have first aid training. So basically Ross says we have a prima facie duty to keep the promise (honour), but also to help the motorist (beneficence), and the more stringent one (beneficence) wins.
What I like about it: it adds up to normality, without weird consequences like those of act utilitarianism (harvest the traveler’s organs) or Kantianism (don’t lie to the murderer).
What I don’t like about it: it adds up to normality, i.e., it doesn’t ever tell me anything I don’t want to hear! Since my moral intuitions are what decides the question, the whole thing functions as a big rubber stamp on What I Already Thought. I can probably find some knuckle-dragging bigot within a 1-km radius who has a moral intuition that fags must die. He reads Ross & says: “Yeah, this guy agrees with me!” So there is a wrong moral intuition. On the other hand, before reading Peter Singer (a consequentialist), I didn’t think it was obligatory to give aid to foreign victims of starvation & preventable disease; now I think it is as obligatory as, in his gedanken experiment, pulling a kid out of a pond right beside you (even though you’ll ruin your running shoes). Ross would not have made me think of that; whatever “seemed” right to me would be right.
I am also really, really suspicious of the a priori and the prima facie. It seems very handwavy to jump straight to these “duties” when the whole point is to arrive at them from something that is not morality—either from consequences or through some sort of symmetry.
“The whole thing functions as a big rubber stamp on What I Already Thought”
Speaking as a (probably biased) consequentialist, I generally got the impression that this was pretty much the whole point of Deontology.
However, the example of Kant being against lying seems to go against my impression. Kantian deontology is based on reasoning about your rules, so it seems to be consistent in that case.
Still, it seems to me that more mainstream Deontology allows you to simply make up new categories of acts (e.g., lying is wrong, but lying to murderers is OK) in order to justify your intuitive response to a thought experiment. How common is it for Deontologists to go “yeah, this action has utterly horrific consequences, but that’s fine because it’s the correct action”, the way it is for Consequentialists to do the reverse? (Again, having now heard about the example of Kant, I might be confusing Deontology with “intuitive morality” or “the noncentral fallacy”.)
So I think I have pretty good access to the concept of personhood but the existence of rights isn’t obvious to me from that concept. Is there a particular feature of personhood that generates these rights?
That’s one of my not-finished things: spelling out exactly why I think you get there from here.
Rather than take the “horrible consequences” tack, I’ll go in the other direction. How possible is it that something can be deontologically right or wrong if that something is something no being cares about, nor do they care about any of its consequences, by any extrapolation of their wants, likes, conscious values, etc., nor should they think others care? Is it logically possible?
a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong...the would-be murderer who doesn’t know his gun is unloaded.
You seem to answer your own question in the quote you chose, even though it seems like you chose it to critique my inconsistent pronoun use. If no being cares about something, nor wants others to care about it, then they’re not likely to want to retain their rights over it, are they?
The sentences in which I chose “ey” are generic. The sentences in which I used “he” are about a single sample person.
So if they want to retain without interruption their right to, say, not have a symmetrical spherical stone at the edge of their lawn rotated without permission, they perforce care whether or not it is rotated? They can’t merely want a right? Or if they want a right, and have a right, and they don’t care to exercise the right, but want to retain the right, they can’t? What if the only reason they care to prohibit stone turning is to retain the right? Does that work? Is there a special rule saying it doesn’t?
As part of testing theories to see when they fail rather than succeed, my first move is usually to try recursion.
Least convenient possible world, please.
Regardless, you seem to believe that some other forms of deontology are wrong but not illogical, and that consequentialist theories are wrong or illogical. For example, you would label a deontology otherwise like yours, but that valued attentiveness to evidence more, as wrong and not illogical. I am asking whether you would consider a deontological theory invalid if it ignored the wants, cares, etc. of beings, not whether that is part of your theory.
If it’s not illogical and merely wrong, is that to say you count it among the theories that may be true if you are mistaken about facts, but not about what is and is not illogical?
I think such a deontology would be illogical, but am to various degrees unsure about other theories, which are right and which wrong, and about the severity and number of wounds in the wrong ones. Because this deontology seems illogical, it makes me suspicious of its cousin theories, as it might be a salient case exhibiting a common flaw.
I think it is more intellectually troubling than the hypothetical of committing a small badness to prevent a larger one, but as it is rarely raised, presumably others disagree or have different intuitions.
I don’t see the point of mucking with the English language and causing confusion for the sake of feminism if the end result is that singular sample murderers are gendered. It seems like the worst of both worlds.
I don’t think people have the silly right you have described.
I don’t think your attempt at “recursion” is useful unless you are interested in rigorously defining “want” and “care” and any other words you are tempted to employ in that capacity.
I don’t think I have drawn on an especially convenient possible world.
I don’t think you’re reading me charitably, or accurately.
I don’t think you’re predicting my dispositions correctly.
I don’t think you’re using the words “invalid” or “illogical” to refer to anything I’m accustomed to using the words for.
I don’t think you make very much sense.
I don’t think I consulted you, or solicited your opinion about, my use of pronouns.
I don’t think you’re initiating this conversation in good faith.
I’m sorry you feel that way. I tried to be upfront about my positions that you would disfavor: a form of feminism and also deontology. Perhaps you interpreted as egregious malicious emphasis on differences what I intended as the opposite.
Also, I think what you’re interpreting as predicting dispositions wrongly is what I see as trying to spell out all possible objections as a way to have a conversation with you, rather than a debate in which the truth falls out of an argument. That means I raise objections that we might anticipate someone with a different system would raise, rather than setting up to clash with you.
I think that when you say I am not reading you charitably or accurately, you have taken what was a very reasonable misreading of my first comment and failed to update based on my second. I’m not talking about your theory. I’m trying to ask how fundamental the problems are in a somewhat related theory. Whether your theory escapes its gravity well of wrongness depends on both the distance from the mass of doom and its size. I hope that analogy was clear, as apparently other stuff hasn’t been. So you can probably imagine what I think, as it somewhat mirrors what you seem to think: you’re not reading me charitably, accurately, etc. I know you’re not innately evil, of course, that’s obvious and foundational to communication.
I am exiting this conversation now. I believe it will net no good.
Kant’s answer, greatly simplified, is that rational agents will care about following moral rules, because that is part of rationality.
Why those particular rights? It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.
If you lived in a world where your system of rights didn’t typically lead to beneficial consequences, would you still believe them to be correct?
What do you mean, “these particular rights”? I haven’t presented a list. I mentioned one right that I think we probably have.
It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.
Oh, now, that was low.
If you lived in a world where your system of rights didn’t typically lead to beneficial consequences, would you still believe them to be correct?
Do you mean: does Alicorn’s nearest counterpart who grew up in such a world share her opinions? Or do you mean: if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context? They’re different questions.
Yeah, but most people don’t come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture.
They’re different questions.
Now I’m curious. Is your answer to them different? Could you please answer both of those hypotheticals?
ETA: If your answer is different, then isn’t your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?
does Alicorn’s nearest counterpart who grew up in such a world share her opinions?
Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn’t even like chocolate chip cookie dough ice cream.
if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context?
No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn’t have to change at all for a change in circumstance and accompanying new consequential regularities to motivate changes in my behavior, as long as I have my eyes open. This doesn’t make my morality “based on consequences”; it just means that my intentions are informed by my expectations, which are influenced by inductive reasoning from the past.
I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?
A ridiculous example: If an orphanage exploded every time someone did nothing in a moral dilemma, wouldn’t OAC be likely to invent a moral system saying inaction is more bad than action? Wouldn’t OAC also likely believe that inaction is inherently bad? I doubt OAC would say, “I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action.”
Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you’re peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.
I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?
It seems likely indeed that someone would do that.
If an orphanage exploded every time someone did nothing in a moral dilemma
I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible.
One thing that’s very complicated and not fully fleshed out that I didn’t mention is that, in certain cases, one might be obliged to waive one’s own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.
It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions.
Agreed. It is also rather convenient that maximizing preference satisfaction rarely involves violating anyone’s rights and mostly jibes with human intuitions.
And that’s because normative ethics is just about trying to come up with nice-sounding theories to explain our ethical intuitions.
Umm… torture vs dust specks is both counterintuitive and violates rights. Utilitarian consequentialists also flip the switch in the trolley problem, again violating rights.
It doesn’t sound nice or explain our intuitions. Instead, the goal is the most good for the most people.
I said:
maximizing preference satisfaction rarely involves violating anyone’s rights and mostly jibes with human intuitions.
Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.
Because she says so. Which is a good reason. Much as I have preferences for possible worlds because I say so.
Thanks for writing this out. I think you’ll be unsurprised to learn that this substantially matches my own “moral code”, even though I am (if I understand the terminology correctly) a utilitarian.
I’m beginning to suspect that the distinction between these two approaches comes down to differences in background and pre-existing mental concepts. Perhaps it is easier, more natural, or more satisfying for certain people to think in these (to me) very high abstractions. For me, it is easier, more natural, and more satisfying to break down all of those lofty concepts and dynamics again, and again, until I’ve arrived (at least in my head) at the physical evolution of the world into successive states that have ranked value for us.
EDIT: FWIW, you have actually changed my understanding of deontology. Instead of necessarily involving unthinking adherence to rules handed down from on-high/outside, I can now see it as proceeding from more basic moral concepts.
I find myself largely in agreement with most of this, despite being a consequentialist (and an egoist!).
Where’s the point of disagreement that makes you a consequentialist, then?
Because while I agree that people have rights and that it’s wrong to violate them, rights are themselves derived from consequences and preferences (via contractarian bargaining), and also that “rights” refers to what governments ought to protect, not necessarily what individuals should respect (though most of the time, individuals should respect rights). For example, though in normal life, justice requires you* to not murder a shopkeeper and steal his wares, murder would be justified in a more extreme case, such as to push a fat man in front of a trolley, because in that case you’re saving more lives, which is more important.
My main disagreement, though, is that deontology (and traditional utilitarianism, and all agent-neutral ethical theories in general) fails to give a sufficient explanation of why we should be moral.
* By which I mean something like “in order to derive the benefits of possessing the virtue of justice”. I’m also a virtue ethicist.
Consequentialism can override rules just where consequences can be calculated...which is very rarely.
Wow. You would try to stop me from saving the world. You are evil. How curious.
Why, what wrong acts do you plan to commit in attempting to save the world?
Evil and cunning. No! I shall not be revealing my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).
Do you believe that the world’s inhabitants have a right to your protection? Because if they do, that’ll excuse some things.
Absolutely!
Ok, give me some perspective here. Just how many babies’ worth of excuse? Consider this counterfactual:
Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functioning brain uploading and emulating technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist utopia, a ‘subsistence paradise’. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?
Perhaps a more interesting question is: would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy?
Does it matter whether you have been warned of my intent? Assume that all you knew was that I assign a low utility to the future Robin seeks, that Robin has a puppy weakness, and that I have just discovered Robin has completed his research. Would you be morally obliged to intervene?
Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I’m standing right behind him wielding a puppy in a two-handed grip and you are right there with me. Would you kill the puppy to save Robin?
Aw, thanks...?
If there is in fact something morally wrong about releasing the tech (your summary doesn’t indicate it clearly, but I’d expect it of most drastic actions Robin seems disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I’m imperfect.
That is promising. Would you let me kill Dave too?
If you’re in the room with Dave, why wouldn’t you just push the AI’s reset button yourself?
See link. Depends on how I think he would update. I would kill him too if necessary.
If there is in fact something morally wrong about releasing the tech
I don’t know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals interfered with my seeking desirable future outcomes for the planet.