Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary, and given that the space of all possible human behaviors is quite large, it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible.
Utility functions have the same problem. See below for more details.
The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
Huh? This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things, such as torturing children to cure malaria, that they find deontologically repugnant.
Utility functions have the same problem. See below for more details.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things...
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
What problems would those be? The only problems you mentioned in your previous post are:
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary, and given that the space of all possible human behaviors is quite large, it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible.
and
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto).
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
This analogy isn’t accurate. I’m not saying looking at consequences/GPS navigation is invalid. You’re the one who’s saying all non-GPS navigation is invalid/look only at consequences.
Wait, what? What Bugmaster described sounds like the behavior of most of the consequentialists I’ve encountered.
Also, I don’t see what the linked situation (i.e. torture vs. malaria) has actually to do with the current issue. The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
When presented with a new potential behavior, in this case torturing children to cure malaria, that provides an actual consequentialist reason for doing something deontologically repugnant, he winds up doing logical back flips.
The issue is that the consequentialist has a secret set of deontological maxims, and he chose his utility function to avoid being forced to violate them; he thus has problems when it turns out he does have to violate them to maximize the utility function. His first reaction to this is frequently to deny that the repugnant action would in fact maximize his utility function, sometimes even resorting to anti-epistemology in order to do so. If that fails he will change his utility function; do this enough and the utility function starts to resemble a count of the number of maxim violations.
Edit: Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Second of all, torturing children is not a new behavior, in the way Bugmaster was using the phrase. A new behavior is something that wasn’t available before, wasn’t possible, like “copying digital media”. You couldn’t copy digital media in the year 1699 no matter what your moral beliefs were. You could, on the other hand, torture children all you liked.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it since it has several similar but different definitions and thus tends to cause confusion in discussions.
Second of all, torturing children is not a new behavior
True, but torturing children to cure malaria is. Another example that may make things clearer is wire-heading, which causes problems for a utility function that hasn’t sufficiently specified what it means by “pleasure” just as “copying digital media” can cause problems for maxims that haven’t specified what they mean by “theft”.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it since it has several similar but different definitions and thus tends to cause confusion in discussions.
My entire point is that you are ascribing things to consequentialism that are true of utilitarianism, but are not true of consequentialism-in-general.
Ok, I was occasionally talking about von Neumann–Morgenstern consequentialism since that’s what most consequentialists around here are. If you mean something else by “consequentialism”, please define it. We may have a failure to communicate here.
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
The SEP article on consequentialism is, as usual, a good intro/summary. To give a flavor of what other kinds of consequentialism one may have, here, to a first approximation, is my take on the list of claims in the “Classic Utilitarianism” section of the article:
Consequentialism: yes.
Actual Consequentialism: no.
Direct Consequentialism: no.
Evaluative Consequentialism: yes, provisionally.
Hedonism: no.
Maximizing Consequentialism: intuition says no, because it seems to exclude the notion of supererogatory acts.
Aggregative Consequentialism: intuition says yes, but this is problematic (Bostrom 2011) [2], so perhaps not.
Total Consequentialism: probably not (though average is wrong too; then again, without the aggregative property, I don’t think this problem even arises).
Universal Consequentialism: intuition says no, but I have a feeling that this is problematic; then again, a “yes” answer to this, while clearly more consistent, fails to capture some very strong moral intuitions.
Equal Consideration: see the universal property; same comment.
Agent-neutrality: seems like obviously yes, but this is one I admit I know little about the implications of.
As you can see, I reject quite a few of the claims that one must assent to in order to be a classic utilitarian (and a couple which are required for VNM-compliance), but I remain a consequentialist.
[1] Usually “things” = acts, “properties” = moral rightness.
[2] Infinite Ethics
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
Should I take that to mean only on the basis of consequences, or on the basis of consequences and other things?
Edit: Although one of the interesting conclusions of Bostrom’s aforementioned paper is that bounding aggregative consequentialism with deontology gives better[1] results than just applying consequentialism. (Which I take to cast doubt on the aggregative property, among other things, but it’s something to think about.)
[1] “Better” = “in closer accord with our intuitions”… sort of. More or less.
Ok, in that case most of my criticism of consequentialism still applies, just replace “utility function” with whatever procedure general consequentialists use to compute moral actions.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as close to the original intuitions as possible.
It’s a naturalistic process. It’s certainly not arbitrarily pulled from nowhere. The fact is that we, humans, have certain moral intuitions. Those intuitions may be “arbitrary” in some abstract sense, but they certainly do exist, as actual, measurable facts about the world (since our brains are part of the world, and our brains are where those intuitions live).
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic. Robin Hanson wrote a paper on it (maybe multiple papers, but I recall one off the top of my head).
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have. Because they are what we use (the only thing we could use) to judge anything else that we select as the source of morality. Again, this stuff is all in the Sequences.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as close to the original intuitions as possible.
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic.
If you mean the meta-ethics sequence, it’s an argument for why we base our morality on intuitions (and even then I don’t think that’s an entirely accurate summary); its argument for pure consequentialism is a lot weaker and relies entirely on the VNM theorem. Since you’ve claimed not to be a VNM consequentialist, I don’t see how that sequence helps you. Also, you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have.
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Also, if you say moral intuition is our only source of morality, then if people’s moral intuitions differ, are they obligated to obey their personal moral intuition? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
Which moral intuition is that...?
Also, you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Yes, I studied some of them in college. My assessment of academic philosophers is that most of them are talking nonsense most of the time. There are exceptions, of course. If you want to talk about the positions of any particular philosopher(s), we can do that (although perhaps for that it might be worthwhile to start a new Discussion thread, or something). But just the fact that many philosophers think some particular thing isn’t strong evidence of anything interesting or convincing.
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world, whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such. Insofar as any of those formal systems describe any aspect of reality, we can look at reality and see that.
For morality there just isn’t anything else, beyond our intuitions.
Also, if you say moral intuition is our only source of morality, then if people’s moral intuitions differ, are they obligated to obey their personal moral intuition? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Moral laws don’t exist anywhere outside of human brains, so in one sense this entire line of questioning is meaningless. It’s not like moral laws can actually compel you to do one thing or another, regardless of whether you are a consequentialist or a deontologist or what. Moral laws have force insofar as they are convincing to any humans who have the power to enforce them, whether this be humans deciding to follow a moral law in their own lives, or deciding to impose a moral law on others, etc.
If people’s moral intuitions differ then I guess those people will have to find some way to resolve that difference. (Or maybe not? In some cases they can simply agree to go their separate ways. But I suppose you’d say, and I’d agree, that those are not the interesting cases, and that we’re discussing those cases where the disagreement on morality causes conflict.)
I mean, I can tell you what tends to happen in practice when people disagree on morality. I can tell you what I in particular will do in any given case. But asking what people should do in cases of moral disagreement is just passing the buck.
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this? It doesn’t seem like you are, though; I get the sense that you are merely objecting to the suggestion that consequentialism has the answers, where deontology does not. If so, then I grant that it does not. However, these are not the questions on which basis I judge deontology to be inferior.
Rather, my point was that even if we grant that there are, or should be, absolute, unbreakable moral laws that judge actions, regardless of consequences (i.e. accept the basic premise of deontology), it’s entirely unclear what those laws should be, or where they come from, or how we should figure out what they are, or why these laws and not some others, etc. Consequentialism doesn’t have this problem. Furthermore, because moral intuitions are the only means by which we can judge moral systems, the question of whether a moral system satisfies our moral intuitions is relevant to whether we accept it. Deontology, imo, fails in this regard to a much greater degree than does consequentialism.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world,
Because our physical intuitions tell us that should work.
whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such.
Then why are we focusing on those particular formal systems? Also where do our ideas about how formal systems should work come from?
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this?
Well, look at the game theory based decision theories, notice that they seem to be converging on something resembling Kantian deontology. Also, what do you hope that, don’t you want the issue resolved?
Because our physical intuitions tell us that should work.
I’m not really sure what you mean by this.
Then why are we focusing on those particular formal systems?
Why indeed? Mathematics does sometimes examine formal systems that have no direct tie to anything in the physical world, because they are mathematically interesting. Sometimes those systems turn out to be real-world-useful.
Also where do our ideas about how formal systems should work come from?
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way. Therefore, that is how they work. Why do we care? Well, because that’s an approach that allows us to discover/invent new math, and apply that math to solve problems.
Well, look at the game theory based decision theories, notice that they seem to be converging on something resembling Kantian deontology.
Really? Kantian deontology, and definitely not rule consequentialism?
Also, what do you hope that, don’t you want the issue resolved?
I meant, by that, that such a claim would be clearly false. If you were claiming clearly false things then that would make this conversation less interesting. ;)
Because our physical intuitions tell us that should work.
I’m not really sure what you mean by this.
Where does your belief that observing the world will lead us to true beliefs come from?
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way.
First, where do those definitions come from? Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Really? Kantian deontology, and definitely not rule consequentialism?
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule consequentialist rule?
I meant, by that, that such a claim would be clearly false.
Careful, just because you can’t think of a way to resolve a philosophical problem doesn’t mean there is no way to resolve it.
… and many posts in the Sequences. (The posts/essays themselves aren’t an answer to “where does this belief come from”, but their content is.)
First, where do those definitions come from?
We made ’em up.
Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule consequentialist rule?
I am passingly familiar with these systems. I don’t know why you would claim that they have anything to do with deontology, since the entire motivation for accepting superrationality is “it leads to better consequences”. If you follow unbreakable rules because doing so leads to better outcomes, then you are a consequentialist.
Careful, just because you can’t think of a way to resolve a philosophical problem doesn’t mean there is no way to resolve it.
Um, ok, fair enough, so in that case how about we stop dancing around the issue, and I will just ask straight out:
Do you believe that deontology has a resolution to the aforementioned issues? Or no?
Upvoted for spotting something probably non-obvious: the parallel between Kantian ethics and certain decision theories seems quite interesting and never occurred to me. It’s probably worth exploring how deep it runs, perhaps the idea that being a rational agent in itself compels you inescapably to follow rules of a certain form might have some sort of reflection in these decision theories.
Also, [why] do you hope that, don’t you want the issue resolved?
I certainly would hope that there doesn’t turn out to be a universal cosmic moral law derivable from nothing but logic, if it happens to be a law I really hate like “you must kill kittens”. :)
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but you are left with the question of on what basis you are making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism.
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually means caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood properly, this isn’t a terminological confusion, it’s a conflict of values. If I understood him correctly, he thinks it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can see how someone might pick up this definition from context, based on some of the standard examples
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life
They’re not ignoring his life, they’re counting it as 1 VP (Victory Point) and contrasting it with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
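A minimal sketch of that point, with numbers made up purely for illustration: the consequentialist scheme (pick the action whose outcome scores higher) stays fixed, and only the utility function changes.

```python
# Two trolley outcomes, described by how many die and whether the agent
# directly violates someone's boundaries (numbers are illustrative only).
outcomes = {
    "push the fat man": {"deaths": 1, "violations": 1},
    "do nothing":       {"deaths": 5, "violations": 0},
}

def deaths_only(o):
    return -o["deaths"]

def boundary_weighted(o, penalty=100):
    # Same consequentialist scheme, different utility function: direct
    # boundary violations are weighted heavily enough to dominate.
    return -o["deaths"] - penalty * o["violations"]

for utility in (deaths_only, boundary_weighted):
    best = max(outcomes, key=lambda name: utility(outcomes[name]))
    print(f"{utility.__name__}: {best}")
# deaths_only: push the fat man
# boundary_weighted: do nothing
```

Whether a real agent’s utility function should contain such a penalty term is, of course, exactly what the disagreement above is about; the sketch only shows where that disagreement lives.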
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which do not allow you to push the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma; you are still not permitted to throw the fat man on the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can, perhaps, (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included.
Well, most of the well known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
most of the well known consequentialist dilemmas rely on forbidding considering the path
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
not caring about it is one of the premises of the VNM theorem
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to draw straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
I don’t think that’s a consequentialist thought experiment, though?
What do you mean by “consequentialist thought experiment”?
I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
What do you mean by “consequentialist thought experiment”?
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then…
(a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values?
(b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
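In symbols, and only as a sketch of that point (the $V_i$ and weights are placeholders, not anyone’s actual values): a utility function over world-states $w$ can just as well be

$$U(w) = f\bigl(V_1(w), \dots, V_n(w)\bigr), \qquad \text{e.g. } U(w) = \sum_{i=1}^{n} \lambda_i V_i(w) \text{ with } \lambda_i > 0,$$

and an agent maximizing $U$ is exactly as coherent for $n = 7$ as for $n = 1$; nothing in the formalism forces the combination to collapse to a single term.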
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the level, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
This reflection process can go down different paths depending on the order that thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they are personally killing someone, and you think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (like mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
There’s the so called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value vs a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions).
What process of reflection are you using that you think leads people toward a single value?
Does it avoid the problems with my old one that I described?
Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick one? I bet you can imagine tradeoffs between bald people being happy, and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are, and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decisionmaking; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
why should contemplating tradeoffs between how much of each value we can get force us to pick one?
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered, is a subset of “not believing you’ll follow through on your offer”.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.” I’m not clear on where I claimed otherwise, though… can you point me at that claim?
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context, you did implicitly claim that. In context, you were responding to something you disagreed with and so it must mean that you thought that they would not in fact do that and you were presenting the claim that they would not do that to support your argument.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real)
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you’re either willfully deceptive or incapable of communicating clearly, in such a way that it looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what we want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked him and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d enjoy otherwise, but that doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates from other theories, and the counterargument is not “fully general”.
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
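Put in set notation, purely as a restatement of the definition above (writing $\mathrm{Attr}(W)$ for the set of attributes of a world-state $W$ and $\mathrm{Cons}(X)$ for the consequences of action $X$):

$$\mathrm{Attr}(W_a) \setminus \mathrm{Attr}(W_b) \subseteq \mathrm{Cons}(A), \qquad \mathrm{Attr}(W_b) \setminus \mathrm{Attr}(W_a) \subseteq \mathrm{Cons}(B),$$

and only the symmetric difference $\mathrm{Attr}(W_a) \,\triangle\, \mathrm{Attr}(W_b)$ can motivate a choice between $A$ and $B$; whether the shared attributes also count as consequences is left open, since it makes no difference to the choice.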
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”, e.g., judging by your reaction to my previous posts I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, a la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
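For concreteness, here is a rough sketch of that Newcomb estimate in code; the $1,000,000 / $1,000 payoffs are the textbook ones, and the 0.05 confidence figure is made up purely for illustration.

```python
# Rough expected-value comparison for Newcomb-like problems, following the
# estimate described above. c1 = my confidence that I can be the sort of
# person Omega predicts will one-box while in fact two-boxing.

BIG, SMALL = 1_000_000, 1_000  # classic (assumed) payoffs

def expected_value(two_box: bool, c1: float) -> float:
    if not two_box:
        # Simplification: if I genuinely one-box, assume Omega predicted it.
        return float(BIG)
    # If I two-box, I only also get BIG in the (low-probability) case
    # that Omega nonetheless predicted I would one-box.
    return c1 * (BIG + SMALL) + (1 - c1) * SMALL

c1 = 0.05  # "C1 is low", as in the classic formulation
print(expected_value(two_box=False, c1=c1))  # one-box:  1000000.0
print(expected_value(two_box=True, c1=c1))   # two-box:  51000.0
```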
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
For the consequentialist to actually start torturing children for this reason, he would have to know, to a high degree of certainty, that the utility function is maximized by torturing children. It may be that, given that he doesn’t have perfect knowledge, he is incapable of knowing that to the required degree. This would mean that he remains a consequentialist but could not be induced to torture children.
Edit: There’s also the possibility that his decision affects how other people make decisions, which is itself a sort of consequence that has to be weighed. If many of the people around him are deontologists, torturing children may have the side effect of making torturing children more acceptable to the deontologists around him, leading to those deontologists torturing children in cases that have bad consequences.
That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
Utility functions have the same problem. See blow for more details.
Huh? This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things, such as torturing children to cure malaria, that they find deontologically repugnant.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
What problems would those be? The only problems you mentioned in your previous post are:
and
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
This analogy isn’t accurate. I’m not saying that looking at consequences/GPS navigation is invalid. You’re the one who’s saying that all non-GPS navigation is invalid / that we should look only at consequences.
Wait, what? What Bugmaster described sounds like the behavior of most of the consequentialists I’ve encountered.
Also, I don’t see what the linked situation (i.e. torture vs. malaria) actually has to do with the current issue. The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
When presented with a new potential behavior, in this case torturing children to cure malaria, that provides an actual consequentialist reason for doing something deontologically repugnant, he winds up doing logical back flips.
The issue is that the consequentialist has a secret set of deontological maxims, and he chose his utility function to avoid being forced to violate them; he thus has problems when it turns out he does have to violate them to maximize the utility function. His first reaction to this is frequently to deny that the repugnant action would in fact maximize his utility function, sometimes even resorting to anti-epistemology in order to do so. If that fails, he will change his utility function; do this enough and the utility function starts to resemble a count of the number of maxim violations.
Edit: Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Second of all, torturing children is not a new behavior, in the way Bugmaster was using the phrase. A new behavior is something that wasn’t available before, wasn’t possible, like “copying digital media”. You couldn’t copy digital media in the year 1699 no matter what your moral beliefs were. You could, on the other hand, torture children all you liked.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it, since it has several similar but different definitions and thus tends to cause confusion in discussions.
True, but torturing children to cure malaria is. Another example that may make things clearer is wire-heading, which causes problems for a utility function that hasn’t sufficiently specified what it means by “pleasure” just as “copying digital media” can cause problems for maxims that haven’t specified what they mean by “theft”.
My entire point is that you are ascribing things to consequentialism that are true of utilitarianism, but are not true of consequentialism-in-general.
Ok, I was occasionally talking about Von Neumann–Morgenstern consequentialism since that’s what most consequentialists around here are. If you mean something else by “consequentialism”, please define it. We may have a failure to communicate here.
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
The SEP article on consequentialism is, as usual, a good intro/summary. To give a flavor of what other kinds of consequentialism one may have, here, to a first approximation, is my take on the list of claims in the “Classic Utilitarianism” section of the article:
Consequentialism: yes.
Actual Consequentialism: no.
Direct Consequentialism: no.
Evaluative Consequentialism: yes, provisionally.
Hedonism: no.
Maximizing Consequentialism: intuition says no, because it seems to exclude the notion of supererogatory acts.
Aggregative Consequentialism: intuition says yes, but this is problematic (Bostrom 2011) [2], so perhaps not.
Total Consequentialism: probably not (though average is wrong too; then again, without the aggregative property, I don’t think this problem even arises).
Universal Consequentialism: intuition says no, but I have a feeling that this is problematic; then again, a “yes” answer to this, while clearly more consistent, fails to capture some very strong moral intuitions.
Equal Consideration: see the universal property; same comment.
Agent-neutrality: seems like an obvious yes, but this is one whose implications I admit I know little about.
As you can see, I reject quite a few of the claims that one must assent to in order to be a classic utilitarian (and a couple which are required for VNM-compliance), but I remain a consequentialist.
[1] Usually “things” = acts, “properties” = moral rightness.
[2] Infinite Ethics
Should I take that to mean only on the basis of consequences, or on the basis of consequences and other things?
Only, yes.
Edit: Although one of the interesting conclusions of Bostrom’s aforementioned paper is that bounding aggregative consequentialism with deontology gives better[1] results than just applying consequentialism. (Which I take to cast doubt on the aggregative property, among other things, but it’s something to think about.)
[1] “Better” = “in closer accord with our intuitions”… sort of. More or less.
Ok, in that case most of my criticism of consequentialism still applies, just replace “utility function” with whatever procedure general consequentialists use to compute moral actions.
No, I really don’t think that it does.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic — making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), etc., while hewing as close to the original intuitions as possible.
It’s a naturalistic process. It’s certainly not arbitrarily pulled from nowhere. The fact is that we, humans, have certain moral intuitions. Those intuitions may be “arbitrary” in some abstract sense, but they certainly do exist, as actual, measurable facts about the world (since our brains are part of the world, and our brains are where those intuitions live).
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic. Robin Hanson wrote a paper on it (maybe multiple papers, but I recall one off the top of my head).
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have. Because they are what we use (the only thing we could use) to judge anything else that we select as the source of morality. Again, this stuff is all in the Sequences.
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
If you mean the meta-ethics sequence, it’s an argument for why we base our morality on intuitions (and even then I don’t think that’s an entirely accurate summary); its argument for pure consequentialism is a lot weaker and relies entirely on the VNM theorem. Since you’ve claimed not to be a VNM consequentialist, I don’t see how that sequence helps you. Also, you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Also, if you say moral intuition is our only source of morality, and people’s moral intuitions differ, are they obligated to obey their personal moral intuitions? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Which moral intuition is that...?
Yes, I studied some of them in college. My assessment of academic philosophers is that most of them are talking nonsense most of the time. There are exceptions, of course. If you want to talk about the positions of any particular philosopher(s), we can do that (although perhaps for that it might be worthwhile to start a new Discussion thread, or something). But just the fact that many philosophers think some particular thing isn’t strong evidence of anything interesting or convincing.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world, whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such. Insofar as any of those formal systems describe any aspect of reality, we can look at reality and see that.
For morality there just isn’t anything else, beyond our intuitions.
Moral laws don’t exist anywhere outside of human brains, so in one sense this entire line of questioning is meaningless. It’s not like moral laws can actually compel you to do one thing or another, regardless of whether you are a consequentialist or a deontologist or what. Moral laws have force insofar as they are convincing to any humans who have the power to enforce them, whether this be humans deciding to follow a moral law in their own lives, or deciding to impose a moral law on others, etc.
If people’s moral intuitions differ then I guess those people will have to find some way to resolve that difference. (Or maybe not? In some cases they can simply agree to go their separate ways. But I suppose you’d say, and I’d agree, that those are not the interesting cases, and that we’re discussing those cases where the disagreement on morality causes conflict.)
I mean, I can tell you what tends to happen in practice when people disagree on morality. I can tell you what I in particular will do in any given case. But asking what people should do in cases of moral disagreement is just passing the buck.
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this? It doesn’t seem like you are, though; I get the sense that you are merely objecting to the suggestion that consequentialism has the answers, where deontology does not. If so, then I grant that it does not. However, these are not the questions on which basis I judge deontology to be inferior.
Rather, my point was that even if we grant that there are, or should be, absolute, unbreakable moral laws that judge actions, regardless of consequences (i.e. accept the basic premise of deontology), it’s entirely unclear what those laws should be, or where they come from, or how we should figure out what they are, or why these laws and not some others, etc. Consequentialism doesn’t have this problem. Furthermore, because moral intuitions are the only means by which we can judge moral systems, the question of whether a moral system satisfies our moral intuitions is relevant to whether we accept it. Deontology, imo, fails in this regard to a much greater degree than does consequentialism.
Because our physical intuitions tell us that that should work.
Then why are we focusing on those particular formal systems? Also, where do our ideas about how formal systems should work come from?
Well, look at the game-theory-based decision theories and notice that they seem to be converging on something resembling Kantian deontology. Also, why do you hope that? Don’t you want the issue resolved?
I’m not really sure what you mean by this.
Why indeed? Mathematics does sometimes examine formal systems that have no direct tie to anything in the physical world, because they are mathematically interesting. Sometimes those systems turn out to be real-world-useful.
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way. Therefore, that is how they work. Why do we care? Well, because that’s an approach that allows us to discover/invent new math, and apply that math to solve problems.
Really? Kantian deontology, and definitely not rule consequentialism?
I meant, by that, that such a claim would be clearly false. If you were claiming clearly false things then that would make this conversation less interesting. ;)
Where does your belief that observing the world will lead us to true beliefs come from?
First, where do those definitions come from? Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule-consequentialist rule?
Careful: just because you can’t think of a way to resolve a philosophical problem doesn’t mean there is no way to resolve it.
http://yudkowsky.net/rational/the-simple-truth
… and many posts in the Sequences. (The posts/essays themselves aren’t an answer to “where does this belief come from”, but their content is.)
We made ’em up.
http://lesswrong.com/lw/rs/created_already_in_motion/
I am passingly familiar with these systems. I don’t know why you would claim that they have anything to do with deontology, since the entire motivation for accepting superrationality is “it leads to better consequences”. If you follow unbreakable rules because doing so leads to better outcomes, then you are a consequentialist.
Um, ok, fair enough, so in that case how about we stop dancing around the issue, and I will just ask straight out:
Do you believe that deontology has a resolution to the aforementioned issues? Or no?
That article ultimately comes down to relying on our (evolved) intuition, which is exactly my point.
Once you self-modify to always follow those rules, you are no longer a consequentialist.
Quite possibly.
Upvoted for spotting something probably non-obvious: the parallel between Kantian ethics and certain decision theories seems quite interesting and never occurred to me. It’s probably worth exploring how deep it runs; perhaps the idea that being a rational agent in itself compels you inescapably to follow rules of a certain form might have some sort of reflection in these decision theories.
I certainly would hope that there doesn’t turn out to be a universal cosmic moral law derivable from nothing but logic, if it happens to be a law I really hate like “you must kill kittens”. :)
Also:
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Except you defined consequentialism as only caring about consequences.
Yes. What contradiction do you see...?
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but then you are left with the question of on what basis you are making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
Very carefully.
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Those are two different consequences.
What do you mean? If I dispose of the body well enough I can make the final outcome atom-for-atom identical.
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
My point is that this is what I meant by expanding the definition of “consequences” here.
That is the usual meaning; at least, I thought it was. Perhaps what we have here is a sound/sound dispute.
I dunno.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually means caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or against the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood properly, this isn’t a terminological confusion, it’s a conflict of values. If I understood him correctly, he thinks it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
Sorry, I’m not following.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
That’s your privilege, of course. Thanks for your time.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequentialist ethic and you need some straw to stuff your opponent with… that’s a different story.
They’re not ignoring his life, they’re counting it as 1 VP (Victory Point) and contrasting it with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequentialist ethics requires us to be looking exclusively at expected life-years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequentialist ethics don’t, so it isn’t; in other words, this feature emerges from the utility function, not the metaethical scheme.
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
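To make that concrete, here’s a toy sketch in Python of how the footbridge verdict flips with the utility function rather than with the metaethics. All the numbers and weights here are invented for illustration; nothing in this thread commits anyone to them.

    # Toy outcomes for the footbridge case: pushing kills one person directly
    # and saves five; not pushing lets the five die.
    push      = {"lives_lost": 1, "direct_violations": 1}
    dont_push = {"lives_lost": 5, "direct_violations": 0}

    def life_counting_utility(outcome):
        # Cares only about how many people die.
        return -outcome["lives_lost"]

    def boundary_sensitive_utility(outcome):
        # Also weighs violating someone's boundaries yourself very heavily
        # (the 100 is an invented weight, not a claim about anyone's values).
        return -outcome["lives_lost"] - 100 * outcome["direct_violations"]

    def verdict(utility):
        return "push" if utility(push) > utility(dont_push) else "don't push"

    print(verdict(life_counting_utility))       # push
    print(verdict(boundary_sensitive_utility))  # don't push

Same consequentialist machinery, opposite verdicts; the difference lives entirely in the utility function.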
So you’re willing to allow summing over feelings of violated boundaries, but not summing over actual violated boundaries, interesting.
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which forbid you to push the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference to your choice in the dilemma: you are still not permitted to throw the fat man on the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can, perhaps, (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
Well, most of the well-known consequentialist dilemmas rely on forbidding consideration of the path; in fact, not caring about the path is one of the premises of the VNM theorem.
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
Well, caring about the path renders the independence axiom meaningless.
Really? Again, I’m not an expert, but …
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too? (I’ve put a statement of the axiom, as I read it, just after this comment.)
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
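For reference, here is the independence axiom as I understand it (a standard statement, not a quote from anyone in this thread): for any lotteries L, M, N and any p in (0, 1],

    L ≽ M   if and only if   pL + (1 − p)N ≽ pM + (1 − p)N.

It constrains preferences over lotteries whose outcomes can be described however one likes; whether “the path” is folded into the outcome description seems to be a separate modeling choice rather than something the axiom itself settles.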
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to draw straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
What do you mean by “consequentialist thought experiment”?
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
Well, as Eliezer explained here, simple moral systems are in fact likely to be wrong.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou art godshatter?”
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (Note: I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then… (a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values? (b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
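A quick sketch of that point about combinations, with invented value functions and weights (the names and numbers are mine, purely for illustration):

    # Several terminal values, combined, occupy the same slot a single value would.
    def happiness(world):
        return world.get("happiness", 0)

    def artistic_expression(world):
        return world.get("art", 0)

    def combined_utility(world, weights=(1.0, 0.3)):
        # A linear combination of two terminal values; you could name the whole
        # thing "goodness" and call it one (complex) value if you liked.
        w_h, w_a = weights
        return w_h * happiness(world) + w_a * artistic_expression(world)

    print(combined_utility({"happiness": 10, "art": 5}))  # 11.5

Nothing about the formalism forces the weight on either term to zero; that has to come from somewhere else.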
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the level, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
1. It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
2. The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
3. All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
4. This reflection process can go down different paths depending on the order that thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they are personally killing someone, and you think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (like mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
5. There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions). What process of reflection are you using that you think leads people toward a single value? Does it avoid the problems with my old one that I described? Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick one? I bet you can imagine tradeoffs between bald people being happy and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So what, on your view, is the simple thing that humans actually value?
Pleasure: when humans have enough of it (wireheading), they will like it more than anything else.
(nods) Well, that’s certainly simple.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered, is a subset of “not believing you’ll follow through on your offer”.
Yes, that’s true. If you believe what I’m offering doesn’t exist, it follows that you ought not believe I’ll follow through on that offer.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.”
I’m not clear on where I claimed otherwise, though… can you point me at that claim?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context, you did implicitly claim it. In context, you were responding to something you disagreed with, so it must mean that you thought they would not in fact do that, and that you were presenting the claim that they would not do that to support your argument.
https://en.wikipedia.org/wiki/Implicature https://en.wikipedia.org/wiki/Cooperative_principle
I see.
OK. Thanks for clearing that up.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
What response were you expecting?
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you’re either willfully deceptive or incapable of communicating clearly, in such a way that it looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked them and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d enjoy otherwise, but doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
Ah, OK. And when you talk about “values”, you mean exclusively the things that control what we like, and not the things that control what we want.
Have I got that right?
That is correct. As I see it, wants aren’t important in themselves, only as far as they’re correlated with and indicate likes.
OK. Thanks for clarifying your position.
How would you test this theory?
Give people pleasure, and see whether they say they like it more than other things they do.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
Do you mean “forcibly wirehead people and see if they decide to remove the pleasure feedback”? Also, see this post.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not “fully general”.
“This theory is too simple” is something that can be argued against almost any theory you disagree with. That’s why it’s fully general.
No, it isn’t: anyone familiar with the linguistic havoc that the sociological theory of systems deigns to inflict on its victims will assure you of that!
Ok, so what’s an example of something that doesn’t count as a “consequence” by your definition?
Beats me. Why does that matter?
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
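To put the rule above in concrete terms, here’s a minimal sketch in Python. The set-of-attributes model of a world-state and the use of a simple scoring function are my own simplifications for illustration, not something anyone here has committed to.

    # Model a world-state as a set of attributes (a deliberate simplification).
    def consequences(world_a, world_b):
        # Attributes of Wa that aren't attributes of Wb are consequences of A,
        # and vice versa for B.
        return world_a - world_b, world_b - world_a

    def choose(world_a, world_b, value):
        # 'value' scores a set of attributes; only the differing attributes
        # (the consequences) can motivate the choice between A and B.
        cons_a, cons_b = consequences(world_a, world_b)
        return "A" if value(cons_a) >= value(cons_b) else "B"

    # Hypothetical example: B adds an extra year of someone's life.
    wa = {"person exists for 1 year"}
    wb = {"person exists for 1 year", "person exists for a 2nd year"}
    print(choose(wa, wb, value=len))  # 'len' is a crude stand-in; prints "B"

The shared attributes never enter the comparison, which is the point being made about them not mattering for the choice.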
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”. E.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, à la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
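Roughly, the Newcomb procedure described above, as a sketch; the thresholds and the particular numbers are invented for illustration, not part of anyone’s stated view.

    def newcomb_decision(c1, c2, low=0.1, high=0.9):
        # c1: confidence I can be the sort of person Omega predicts will one-box
        #     while in fact two-boxing.
        # c2: confidence that Omega predicting I will one-box gets me more money
        #     than Omega predicting I will two-box.
        # The threshold values are arbitrary illustrative choices.
        if c1 < low and c2 > high:
            return "one-box"
        return "two-box"

    # Classic formulation: Omega is a near-perfect predictor and the opaque box
    # dominates, so C1 is tiny and C2 is large.
    print(newcomb_decision(c1=0.01, c2=0.99))  # prints "one-box"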
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
???
One of these cases involves the consequence that someone gets killed. How is that not morally neutral?
For the consequentialist to actually start torturing children for this reason, he would have to know, to a high degree of certainty, that the utility function is maximized by torturing children. It may be that, given that he doesn’t have perfect knowledge, he is incapable of knowing that to the required degree. This would mean that he remains a consequentialist but could not be induced to torture children.
Edit: There’s also the possibility that his decision affects how other people make decisions, which is itself a sort of consequence that has to be weighed. If many of the people around him are deontologists, torturing children may have the side effect of making torturing children more acceptable to the deontologists around him, leading to those deontologists torturing children in cases that have bad consequences.
That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
I am not Omega. I can’t be “certain”.