We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but then you are left with the question of what basis you have for making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism.
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually entails caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood him correctly, this isn’t a terminological confusion; it’s a conflict of values. He seems to think it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can see how someone might pick up this definition from context, based on some of the standard examples
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life
They’re not ignoring his life; they’re counting it as 1 VP (Victory Point) and contrasting it with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
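A minimal sketch of that last point in Python, with made-up numbers rather than anyone’s actual utility function: the same consequentialist rule (pick the action whose outcome scores highest) returns different trolley verdicts depending on which utility function you plug in.

    outcomes = {
        "push":      {"deaths": 1, "direct_killings_by_me": 1},
        "dont_push": {"deaths": 5, "direct_killings_by_me": 0},
    }

    def lives_only(outcome):
        # Count only deaths.
        return -outcome["deaths"]

    def boundary_sensitive(outcome):
        # Same structure, but a heavy penalty on directly killing someone.
        return -outcome["deaths"] - 100 * outcome["direct_killings_by_me"]

    for utility in (lives_only, boundary_sensitive):
        best = max(outcomes, key=lambda action: utility(outcomes[action]))
        print(utility.__name__, "->", best)
    # lives_only -> push
    # boundary_sensitive -> dont_push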
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which disallow pushing the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma: you are still not permitted to throw the fat man onto the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can perhaps (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
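A hedged sketch of that villain variant, again with hypothetical numbers: once the five deaths themselves count as boundary violations (the villain arranged them), an agent-neutral penalty on violations ends up recommending the push, so it cannot mimic the deontological prohibition, while a penalty only on violations you commit yourself can.

    outcomes = {
        "push":      {"violations_by_anyone": 1, "violations_by_me": 1},
        "dont_push": {"violations_by_anyone": 5, "violations_by_me": 0},
    }

    def agent_neutral(outcome):
        # High negative value on boundary violations regardless of who commits them.
        return -10 * outcome["violations_by_anyone"]

    def agent_relative(outcome):
        # High negative value only on violations committed by me.
        return -10 * outcome["violations_by_me"]

    for utility in (agent_neutral, agent_relative):
        print(utility.__name__, "->", max(outcomes, key=lambda a: utility(outcomes[a])))
    # agent_neutral -> push
    # agent_relative -> dont_push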
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included.
Well, most of the well known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
most of the well known consequentialist dilemmas rely on forbidding considering the path
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
not caring about it is one of the premises of the VNM theorem
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
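For reference, a standard textbook statement of the independence axiom under discussion (not anyone’s wording here): for all lotteries L, M, N and any p in (0, 1],

    \[ L \succeq M \iff pL + (1 - p)N \succeq pM + (1 - p)N. \]

It constrains preferences over probability mixtures of outcomes; by itself it says nothing about whether an outcome description may include the path taken to reach it.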
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to be drawing straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
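A hedged sketch of that reply, with made-up numbers: once perceived unfairness and the risk of infighting are counted as consequences in their own right, a straw-drawing lottery can come out ahead of unilaterally picking the “optimal” victim, even though some single choice maximizes the survival term taken on its own.

    # Hypothetical values: Bill is the "optimal" victim on the survival term alone.
    survival_value = {"eat_bill": 10.0, "eat_tom": 9.0}
    unfairness_penalty = 4.0  # assumed cost of a unilateral, contested choice

    unilateral = survival_value["eat_bill"] - unfairness_penalty                  # 6.0
    lottery = 0.5 * survival_value["eat_bill"] + 0.5 * survival_value["eat_tom"]  # 9.5

    print("unilateral:", unilateral, "lottery:", lottery)  # the lottery wins here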
I don’t think that’s a consequentialist thought experiment, though?
What do you mean by “consequentialist thought experiment”?
I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
What do you mean by “consequentialist thought experiment”?
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (Note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then…
(a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values?
(b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
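To make that concrete, a minimal sketch with hypothetical values and weights: a weighted combination of several terminal values occupies exactly the slot a single value would, and can even be given a single name.

    def happiness(world): return world["happiness"]
    def fairness(world):  return world["fairness"]
    def art(world):       return world["artistic_expression"]

    def goodness(world, weights=(1.0, 0.5, 0.2)):
        # One named "value" that is really a linear combination of three.
        return (weights[0] * happiness(world)
                + weights[1] * fairness(world)
                + weights[2] * art(world))

    world_a = {"happiness": 10, "fairness": 2, "artistic_expression": 5}
    world_b = {"happiness": 8,  "fairness": 8, "artistic_expression": 5}
    print(max([world_a, world_b], key=goodness))  # ranks whole worlds by "goodness"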
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the lever, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
This reflection process can go down different paths depending on the order in which the thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and are then told you are a consequentialist, and that there are other people who don’t throw the switch because then they would personally be killing someone, and you think about their thought process and reject it as a bad principle, then when you see the “push the fat man off the bridge” one you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (as mine did) becomes “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions).
What process of reflection are you using that you think leads people toward a single value?
Does it avoid the problems with my old one that I described?
Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick one? I bet you can imagine tradeoffs between bald people being happy, and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are, and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
why should contemplating tradeoffs between how much of each value we can get force us to pick one?
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value, but it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no pleasure great enough to outweigh the loss of having your child murdered is a subset of “not believing you’ll follow through on your offer”.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.” I’m not clear on where I claimed otherwise, though… can you point me at that claim?
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context you did implicitly claim it: you were responding to something you disagreed with, which suggests you thought they would not in fact do that and were presenting the claim that they would not do that to support your argument.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you were either willfully deceptive or incapable of communicating clearly, in a way that looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked him and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d otherwise enjoy, but that doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
This raises the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not “fully general”.
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
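A minimal sketch of that definition, treating world-states as sets of attributes (the attribute names are placeholders for whatever actually distinguishes the worlds):

    world_a = {"person_lives_2_years", "traces_erased_afterwards"}
    world_b = {"person_lives_1_year", "traces_erased_afterwards"}

    consequences_of_A = world_a - world_b   # {'person_lives_2_years'}
    consequences_of_B = world_b - world_a   # {'person_lives_1_year'}
    shared = world_a & world_b              # shared attributes drop out of the comparison

    print(consequences_of_A, consequences_of_B, shared)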
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”; e.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, à la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
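A rough sketch of those two decision rules, with the thresholds and inputs as placeholders rather than anything canonical:

    def newcomb(c1_can_fool_omega, c2_prediction_drives_payoff):
        # One-box when fooling Omega looks unlikely and its prediction determines the payout.
        if c1_can_fool_omega < 0.01 and c2_prediction_drives_payoff > 0.99:
            return "one-box"
        return "two-box"

    def counterfactual_mugging(p_refusal_costs_someone_i_care_about_10k):
        # Keep the money when refusing barely reduces anyone's chance at the $10K.
        if p_refusal_costs_someone_i_care_about_10k < 0.001:
            return "keep money"
        return "pay up"

    print(newcomb(0.001, 0.999))        # classic formulation: one-box
    print(counterfactual_mugging(0.0))  # classic formulation: keep the money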
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
Also:
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Except you defined consequentialism as only caring about consequences.
Yes. What contradiction do you see...?
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but then you are left with the question of what basis you have for making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
Very carefully.
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Those are two different consequences.
What do you mean? If I dispose of the body well enough I can make the final outcome atom-for-atom identical.
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
My point is that this is what I meant by expanding the definition of “consequences” here.
That is the usual meaning; at least, I thought it was. Perhaps what we have here is a sound/sound dispute.
I dunno.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually entails caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood him correctly, this isn’t a terminological confusion; it’s a conflict of values. He seems to think it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
Sorry, I’m not following.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
That’s your privilege, of course. Thanks for your time.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
They’re not ignoring his life; they’re counting it as 1 VP (Victory Point) and contrasting it with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
So you’re willing to allow summing over feelings of violated boundaries, but not summing over actual violated boundaries. Interesting.
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which disallow pushing the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma: you are still not permitted to throw the fat man onto the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can perhaps (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
Well, most of the well known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
Well, caring about the path renders the independence axiom meaningless.
Really? Again, I’m not an expert, but …
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to be drawing straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
What do you mean by “consequentialist thought experiment”?
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
Well, as Eliezer explained here, simple moral systems are in fact likely to be wrong.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (Note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then… (a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values? (b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the lever, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
This reflection process can go down different paths depending on the order in which the thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and are then told you are a consequentialist, and that there are other people who don’t throw the switch because then they would personally be killing someone, and you think about their thought process and reject it as a bad principle, then when you see the “push the fat man off the bridge” one you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (as mine did) becomes “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions). What process of reflection are you using that you think leads people toward a single value? Does it avoid the problems with my old one that I described? Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick just one? I bet you can imagine tradeoffs between bald people being happy and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question: why would most people meta-reflectively endorse this method of reflection?
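For what it’s worth, here is a minimal sketch of what I mean by “the right linear combination”; the value names, weights, and numbers are all made up purely for illustration.

```python
# Hypothetical weights over two values we have not collapsed into one.
WEIGHTS = {"bald_happiness": 1.0, "haired_happiness": 1.0}

def aggregate_utility(outcome):
    """Weighted sum over every value we care about."""
    return sum(WEIGHTS[v] * outcome.get(v, 0.0) for v in WEIGHTS)

# Two hypothetical policies that favor different groups of people.
policy_a = {"bald_happiness": 10, "haired_happiness": 4}
policy_b = {"bald_happiness": 3, "haired_happiness": 12}

# Which policy wins depends on the numbers in the situation at hand,
# not on ceasing to care about one of the two terms.
print(max([policy_a, policy_b], key=aggregate_utility))  # policy_b here
```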
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So what, on your view, is the simple thing that humans actually value?
Pleasure, as when humans have enough of it (wireheading) they will like it more than anything else.
(nods) Well, that’s certainly simple.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no pleasure great enough to outweigh the loss of having your child murdered is a subset of “not believing you’ll follow through on your offer”.
Yes, that’s true. If you believe what I’m offering doesn’t exist, it follows that you ought not believe I’ll follow through on that offer.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.”
I’m not clear on where I claimed otherwise, though… can you point me at that claim?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context you did implicitly claim it: you were responding to something you disagreed with, which suggests you thought they would not in fact do that and were presenting that claim to support your argument.
https://en.wikipedia.org/wiki/Implicature https://en.wikipedia.org/wiki/Cooperative_principle
I see.
OK. Thanks for clearing that up.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies, and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
What response were you expecting?
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you were either willfully deceptive or incapable of communicating clearly, in a way that looks willfully deceptive. I was hoping you’d offer an alternative other than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked them and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. That is, they think you’d kill the child and give them something they’d otherwise enjoy, but that doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference for not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected-value decision at work: deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
Ah, OK. And when you talk about “values”, you mean exclusively the things that control what we like, and not the things that control what we want.
Have I got that right?
That is correct. As I see it, wants aren’t important in themselves, only as far as they’re correlated with and indicate likes.
OK. Thanks for clarifying your position.
How would you test this theory?
Give people pleasure, and see whether they say they like it more than other things they do.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
Do you mean “forcibly wirehead people and see if they decide to remove the pleasure feedback”? Also, see this post.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not “fully general”.
“This theory is too simple” is something that can be argued against almost any theory you disagree with. That’s why it’s fully general.
No, it isn’t: anyone familiar with the linguistic havoc that the sociological theory of systems deigns to inflict on its victims will assure you of that!
Ok, so what’s an example of something that doesn’t count as a “consequence” by your definition?
Beats me. Why does that matter?
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
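If it helps, here is a minimal sketch of that definition, treating a “state of the world” as a bare set of attributes; the attribute names are placeholders rather than claims about any particular scenario.

```python
# States of the world after actions A and B, modeled as sets of attributes
# (placeholder names only).
W_a = {"x_happens", "z_happens"}
W_b = {"y_happens", "z_happens"}

consequences_of_A = W_a - W_b   # attributes of Wa that Wb lacks
consequences_of_B = W_b - W_a   # attributes of Wb that Wa lacks
shared = W_a & W_b              # shared attributes: can't motivate a choice between A and B

print(consequences_of_A, consequences_of_B, shared)
```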
Ok, now you’re hiding the problem in the word “attribute” and, to a certain extent, “state of the world”. E.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, à la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
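For concreteness, here is a rough sketch of those two rules written out literally; the thresholds standing in for “low”, “high”, and “low enough” are invented, since all I’ve actually committed to is estimating the confidences.

```python
def newcomb_choice(c1, c2, low=0.1, high=0.9):
    """c1: confidence I can be the sort of person Omega predicts will one-box
    while in fact two-boxing.
    c2: confidence that Omega predicting I will one-box gets me more money."""
    return "one-box" if c1 < low and c2 > high else "two-box"

def counterfactual_mugging_choice(p_harm, low_enough=0.01):
    """p_harm: estimated reduction in Omega's chances of giving $10K to anyone
    I care about if I reject the offer."""
    return "keep my money" if p_harm < low_enough else "pay"

# With the classic formulations (c1 low, c2 high; p_harm negligible):
print(newcomb_choice(c1=0.01, c2=0.99))     # one-box
print(counterfactual_mugging_choice(1e-6))  # keep my money
```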
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
???
One of these cases involves the consequence that someone gets killed. How is that morally neutral?