Please describe the hypothetical person who would be helped at all, or convinced of any proposition, by being invited to reflect on arguments for and against consequentialism of which they were already aware.
Do you not believe in consequentialism? I could provide some arguments for it.
What I mainly believe in is the necessity of arguing for claims.
I interpreted this to mean that he believed in consequentialism but did not feel I had sufficiently argued that non-consequentialism is evidence of irrationality. That is, that he was aware of arguments for consequentialism but was choosing not to apply them to the issue.
Maybe this interpretation was wrong, but it was not obviously wrong.
I wouldn’t say that someone “is” irrational because they fail to argue one particular point.
It is just that energy spent asserting that certain ideas are or are not rational would be better spent putting forward arguments. Rationality is something you do.
Then either you can be dutch-booked or you can fail to dutch-book others.
I wouldn’t say that someone “is” irrational because they fail to argue one particular point.
You parsed my sentence wrong.
It is just that energy spent asserting that certain ideas are or are not rational would be better spent putting forward arguments. Rationality is something you do.
There are certain arguments which people on lesswrong are expected to know. Maybe the arguments for consequentialism are not among them?
I would recount them for you, but I don’t really think that will do any good.
I can avoid dutch booking by applying the laws of probability correctly. (And in contexts that have nothing to do with morality.) Do you think probability and consequentialism are somehow the same?
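To make the Dutch-book point concrete, here is a minimal sketch in Python (the events, prices, and stakes are invented for illustration): an agent whose betting prices violate the additivity law P(A) + P(not-A) = 1 can be sold a pair of bets that loses money however the event turns out, while coherent prices close the book.

    # Minimal Dutch-book sketch; numbers are illustrative only.
    # The agent quotes P(rain) = 0.6 and P(no rain) = 0.6, violating
    # P(A) + P(not A) = 1. A bookie sells a 1-unit-stake bet on each
    # event at those prices; exactly one of the two bets will pay out.

    def agent_profit(prices, stake=1.0):
        cost = sum(prices.values()) * stake  # paid for both bets up front
        payout = stake                       # exactly one event occurs, so one bet pays
        return payout - cost                 # the same whichever event occurs

    incoherent = {"rain": 0.6, "no_rain": 0.6}
    coherent = {"rain": 0.6, "no_rain": 0.4}
    print(agent_profit(incoherent))  # -0.2: a guaranteed loss, i.e. a Dutch book
    print(agent_profit(coherent))    #  0.0: no guaranteed loss with coherent prices

Nothing in this construction mentions morality; avoiding it is purely a matter of coherent credences, which seems to be exactly the point being made above.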
There are certain arguments which people on lesswrong are expected to know. Maybe the arguments for consequentialism are not among them?
I would recount them for you, but I don’t really think that will do any good.
I have been reading the material on ethics and have yet to see such an argument. There is a tendency to talk in terms of utility functions, which tends to lend itself to a consequentialist way of thinking, but that is not so much proof as “if the only tool you have is a hammer...”.
I also notice that there are a lot of ethical subjectivists and non-cognitivists on LW. Maybe you could point them to this wonderful proof, if I am beyond hope.
I hear bad things happen if you aren’t a utility maximizer. Utilitarianism doesn’t imply consequentialism, though; you can assign utility depending on whether (sentient?) decision processes choose virtuously and implement your favorite imperative. These ethical systems are consistent.
I find them quite appalling, however. What do you mean, saving four lives is less important than the virtue of not pushing people under trolleys?
Utilitarianism doesn’t imply consequentialism, though; you can assign utility depending on whether (sentient?) decision processes choose virtuously and implement your favorite imperative.
You mean “having a utility function”, not “utilitarianism”. The latter is generally used to mean a specific batch of consequentialist utility functions.
You mean “having a utility function”, not “utilitarianism”. The latter is generally used to mean a specific batch of consequentialist utility functions.
The latter also assumes the possibility of interpersonal utility comparison, which is not the case with von Neumann-Morgenstern utility functions.
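To see why that matters, here is a small sketch (the agents, options, and numbers are mine, purely illustrative): a von Neumann-Morgenstern utility function is only determined up to a positive affine transformation, so rescaling one person’s utilities leaves that person’s preferences untouched but flips any ranking based on summing utilities across people.

    # vNM utilities are invariant under u -> a*u + b with a > 0, so
    # cross-person sums are not well defined. Illustrative numbers only.

    def rescale(u, a, b):
        """Positive affine transformation: same preferences, new numbers."""
        return {option: a * value + b for option, value in u.items()}

    def summed(u1, u2, option):
        return u1[option] + u2[option]

    alice = {"X": 10.0, "Y": 0.0}  # Alice prefers X
    bob = {"X": 0.0, "Y": 1.0}     # Bob prefers Y

    print(summed(alice, bob, "X"), summed(alice, bob, "Y"))    # 10.0 vs 1.0: "choose X"

    bob2 = rescale(bob, a=100.0, b=0.0)  # Bob's preferences are exactly the same
    print(summed(alice, bob2, "X"), summed(alice, bob2, "Y"))  # 10.0 vs 100.0: "choose Y"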
I find them quite appalling, however. What do you mean, saving four lives is less important than the virtue of not pushing people under trolleys?
I find simplistic consequentialist views such as this one appalling, if anything because they combine supreme self-assuredness about important problems with ignorance and lack of insight about their vitally important aspects. (See my responses in the Consequentialism FAQ thread for more detail, especially the ones dealing specifically with trolley problems.)
simplistic consequentialist views such as this one
ignorance and lack of insight
Waaah! You’re a meanie mean-head! :( By which I mean: this was a one-sentence reaction to simplistic virtue ethics. I agree it’s not a valid criticism of complex systems like Alicorn’s tiered deontology. I also agree it’s fair to describe this view as simplistic—at the end of the day, I do in fact hold the naive view. I disagree that it can only exist in ignorance of counterarguments. In general, boiling down a position to one sentence provides no way to distinguish between “I don’t know any counterarguments” and “I know counterarguments, all of which I have rejected”.
supreme self-assuredness
Not sure what you mean; I’m going to map it onto “arrogance” until and unless I learn you meant otherwise. Arrogant people are annoying (hi, atheist blogosphere!), but in practice arrogance isn’t correlated with false ideas.
Or is this just a regular accusation of overconfidence, stemming from “Hey, you underestimate the number of arguments you haven’t considered!”?
my responses in the Consequentialism FAQ thread
You go into social-norms-as-Schelling-points in detail (you seem to point at the existence of other strong arguments?); I agree about the basic idea (that’s why I don’t kill for organs). I disagree about how easily we should violate them. (In particular, Near lives are much safer to trade than Far ones.) Even “Only kill without provocation in the exact circumstances of one of the trolley problems” is a feasible change.
Also, least convenient possible world: after the experiment, everyone in the world goes into a holodeck and never interacts with anyone again.
Interestingly, when you said
Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero.
I automatically pictured myself as the fat man, and felt admiration and gratitude for the heroic sociopath. Then I realized you meant a third party, and did feel creeped out. (This is as it should be; I should be more eager to die than to kill, to correct for selfishness.)
By which I mean: this was a one-sentence reaction to simplistic virtue ethics.
Actually, I was writing in favor of “simplistic” virtue ethics. However simplistic and irrational it may seem, and however rational, sophisticated, and logically airtight the consequentialist alternatives may appear to be, folk virtue ethics is a robust and workable way of managing human interaction and coordination, while consequentialist reasoning is usually at best simply wrong and at worst a rationalization of beliefs held for different (and often ugly) reasons.
You can compare it with folk physics vs. scientific physics. The former has many flaws, but even if you’re a physicist, for nearly all things you do in practice, scientific physics is useless, while folk physics works great. (You won’t learn to ride a bike or throw a ball by studying physics, but by honing your folk physics instincts.) While folk physics works robustly and reliably in complex and messy real-world situations, handling them with scientific physics is often intractable and always prone to error.
Of course, this comparison is too favorable. We do know enough scientific physics to apply it to almost any situation at least in principle, and there are many situations where we know how to apply it successfully with real accuracy and rigor, and where folk physics is useless or worse. In contrast, attempts to supersede folk virtue ethics with consequentialism are practically always fallacious one way or another.
So, the fully naive system? Killing makes you a bad person, letting people die is neutral; saving lives makes you a good person, letting people live is neutral. Giving to charity is good, because sacrifice and wanting to help makes you a good person. There are sacred values (e.g. lives) and mundane ones (e.g. money) and trading between them makes you a bad person. What matters is being a good person, not effects like expected number of deaths, so running cost-benefit analyses is at best misguided and at worst evil. Is this a fair description of folk ethics?
If so, I would argue that the bar for doing better is very, very low. There are a zillion biases that apply: scope insensitivity, loss aversion that flips decisions depending on framing, need for closure, pressure to conform, Near/Far discrepancies, fuzzy judgements that mix up feasible and desirable, outright wishful thinking, prejudice against outgroups, overconfidence, and so on. In ethics, unless you’re going to get punished for defecting against a norm, you don’t have a stake, so biases can run free and don’t get any feedback.
Now there are consequentialist arguments for virtue ethics, and general majoritarian-ish arguments for “norms aren’t completely stupid”, so this only argues for “keep roughly the same system but correct for known biases”. But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
And this is a consequentialist argument. “If I try to kill some to save more, I’ll almost certainly overestimate lives saved and underestimate knock-on effects” is a perfectly good argument. “Killing some to save more makes me a bad person”… not so much.
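For what it’s worth, the “QALYs per hour of effort” feedback proposed above is just a ratio; a toy sketch with entirely invented numbers:

    # Hypothetical activities and numbers, purely for illustration;
    # the point is tracking the ratio, not these particular estimates.
    activities = {
        "earning to give to an effective charity": {"qalys": 40.0, "hours": 200.0},
        "ad-hoc local volunteering": {"qalys": 1.5, "hours": 200.0},
    }
    for name, a in activities.items():
        print(f"{name}: {a['qalys'] / a['hours']:.3f} QALYs per hour of effort")

Whatever the true numbers turn out to be, a metric of this shape at least provides the kind of feedback that, as argued above, folk ethics by itself never generates.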
No, because we don’t even know (yet?) how to formulate such a description. The actual decision procedures in our heads have still not been reverse-engineered, and even insofar as they have, they have still not been explained in game-theoretical and other important terms. We have only started to scratch the surface in this respect.
(Note also that there is a big difference between the principles that people will affirm in the abstract and those they apply in practice, and these inconsistencies are also still far from being fully explained.)
But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions. For practically any problem that’s complicated enough to be realistic and interesting, we lack the necessary knowledge and computational resources to make reliable consequentialist assessments, in terms of QALY or any other standardized measure of welfare. (Also, very few, if any, things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)
Moreover, for any problem that is relevant for questions of power, status, wealth, and ideology, it’s practically impossible to avoid biases. In the end, what looks like a dispassionate and perhaps even scientific attempt to evaluate things using some standardized measure of welfare is more likely than not to be just a sophisticated fig-leaf (conscious or not) for some ideological agenda. (Most notably, the majority of what we call “social science” has historically been developed for that purpose.)
Yes, this is a very pessimistic verdict, but an attempt at sound reasoning should start by recognizing the limits of our knowledge.
I agree with much of your worldview as I’ve interpreted it. In particular I agree that:
•Behavioral norms evolved by natural selection to solve coordination problems and to allow humans to work together productively given the particulars of our biological hard-wiring.
•Many apparently logically sound departures from behavioral norms will not serve their intended functions for complicated reasons of which people don’t have explicit understanding.
•Human civilization is a complicated dynamical system which is (in some sense) at equilibrium, and attempts to shift from this equilibrium will often either fail (because of equilibrating forces) or lead to disaster (on account of destabilizing the equilibrium and causing everything to fall apart).
•The standard for rigor and accuracy in social sciences is often very poor, owing both to the biases of the researchers involved and to the inherent complexity of the relevant problems (as you described in your top-level post).
On the other hand, here and elsewhere in the thread you present criticism without offering alternatives. Criticism is not without value but its value is contingent on the existence of superior alternatives.
But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions.
What do you suggest as an alternative to MixedNuts’ suggestion?
As rhollerith_dot_com said, folk ethics gives ambiguous prescriptions in many cases of practical import. One can avoid some such issues by focusing one’s efforts elsewhere, but not in all cases. People representative of the general population have strong differences of opinion as to what sorts of jobs are virtuous and what sorts of philanthropic activities are worthwhile. So folk ethics alone doesn’t suffice to give a practically applicable ethical theory.
Also, very few, if any things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The discussion has drifted away somewhat from the original disagreement, which was about situations where a seemingly clear-cut consequentialist argument clashes with a nearly universal folk-ethical intuition (as exemplified by various trolley-type problems). I agree that folk ethics (and its natural customary and institutional outgrowths) are ambiguous and conflicted in some situations to the point of being useless as a guide, and the number of such situations may well increase with the technological developments in the future. I don’t pretend to have any great insight about these problems. In this discussion, I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased.
Regarding this, though:
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The important point is that most conflicts get resolved in spontaneous, or at least tolerably costly ways because the conflicting parties tacitly share a focal point when an interpersonal trade-off is inevitable. The key insight here is that important focal points that enable things to run smoothly often lack any rational justification by themselves. What makes them valuable is simply that they are recognized as such by all the parties involved, whatever they are—and therefore they often may seem completely irrational or unfair by other standards.
Now, consequentialists may come up with a way of improving this situation by whatever measure of welfare they use. However, what they cannot do reliably is to make people accept the implied new interpersonal trade-offs as new focal points, and if they don’t, the plan will backfire—maybe with a spontaneous reversion to the status quo ante, and maybe with a disastrous conflict brought by the wrecking of the old network of tacit agreements. Of course, it may also happen that the new interpersonal trade-offs are accepted (whether enthusiastically or by forceful imposition) and the reform is successful. What is essential to recognize, however, is that interpersonal trade-offs are not only theoretically indeterminate, but also that any way of resolving them must deal with these complicated issues of whether it will be workable in practice. For this reason, many consequentialist designs that look great on paper are best avoided in practice.
I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased
I agree. And I like the rest of your response about tacitly shared focal points.
Part of what you may be running up against on LW is people here
(a) Having low intuitive sense for what these focal points are
(b) The existing norms being designed to be tolerable for ‘most people’ and LWers falling outside of ‘most people,’ and correspondingly finding existing norms intolerable with higher than usual frequency.
I know that each of (a) and (b) sometimes applies to me personally.
Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.
Okay, I don’t get it. I can only parse what you’re saying one of two ways:
“We don’t have any idea of how folk ethics works.” But that’s not true; we know it’s not “whatever Emperor Ming says”. We can and do observe folk ethics at work, and notice it favors ingroups, is loss averse, is scope insensitive, etc.
“Any attempt to do better won’t be perfectly free of bias. Therefore, you can’t do better. Therefore, the best you can do is to use folk ethics… which has a bunch of known biases.”
You very likely don’t mean either of these, so I don’t know what you’re trying to say.
These statements are a somewhat crude and exaggerated version of what I had in mind, but they’re actually not that far off the mark.
The basic human folk ethics, shaped within certain bounds by culture, is amazingly successful in ensuring human coordination and cooperation in practice, at both small and large scales. (The fact that we see its occasional bad failures as dramatic and tragic only shows that we’re used to it working great most of the time.) The key issue here is that these coordination problems are extremely hard and largely beyond our understanding. While we can predict with some accuracy how individual humans behave, the problems of coordinating groups of people involve countless complicated issues of game theory, signaling, etc., about which we’re still largely ignorant. In this sense, we really don’t understand how folk ethics works.
Now, the important thing to note is that various aspects of folk ethics may seem irrational and biased (in the sense that changing them would have positive consequences by some reasonable measure), while in fact the truth is much more complicated. These “biases” may in fact be essential for the way human coordination works in practice for some reason that’s still mysterious to us. Even if they don’t have any direct useful purpose, it may well be that given the constraints of human minds, eliminating them is impossible without breaking something else badly. (A prime example is that once someone goes down the road of breaking intuitively appealing folk ethics principles in the name of consequentialist calculations, it’s practically certain that these calculations will end up being fatally biased.)
Here I have of course handwaved the question of how exactly successful human cooperation depends on the culture-specific content of people’s folk ethics. That question is fascinating, complicated, and impossible to tackle without opening all sorts of ideologically charged issues. But in any case, it presents even further complications and difficulties for any attempt at analyzing and fixing human intuitions by consequentialist reasoning.
(Also, similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.)
similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.
Yeah, that seems to be the crux of our disagreement. You still trust people, you haven’t seen them march into death and drag their children along with them and reject a thousand warnings along the way with contempt for such absurd and evil suggestions.
I agree that going against social norms is very costly, that we need cooperation more than ever now there’s seven billion of us, and that if something is bad you still need to coordinate against it. But consider this anecdote:
Many years ago, when I was but a child, I wished to search for the best and rightest politician, and to put them in power. And eagerly did I listen to all, and carefully did I consider their arguments, and honestly did I weigh them against history and the evening news. And lo, an ideology was born, and I gave it my allegiance. But still doubts nagged and arguments wavered, and I wished for closure.
One day my politician of choice called for a rally, and to the rally I went; filled with doubt, but willing to serve. And such joy came upon me that I knew I was right; this wave of bliss was the true sign that my cause was just. (For I was but a child, and did not know of laws of entanglement; I knew not that the joy spoke of human psychology and told not of world states.)
Then it came to pass that I read a history textbook, and in the book was an excerpt from Robert Brasillach, who too described this joy, and who too claimed it as proof of his ideology. Which was fascism. Oops.
Could you say more about what makes folk ethics a form of virtue ethics (or at least sufficiently virtue-based for you to use the term “folk virtue ethics”)? I can see some aspects of it that are virtue-based, but overall it seems like a hodgepodge of different intuitions/emotions/etc.
Yes, it’s certainly not a clear-cut classification. However, I’d say that the principal mechanisms of folk ethics are very much virtue-based, i.e. they revolve around asking what sort of person acts in a particular way, and what can be inferred about others’ actions and one’s own choice of actions from that.
Your praise for folk ethics would be more persuasive to me, Vladimir, if it came with a description of folk ethics—and if that description explained how folk ethics avoids giving ambiguous answers in many important situations—because it seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In other words, although I am sympathetic to arguments for conservatism in matters of interpersonal relationships and social institutions, your argument would be a whole lot stronger if the process of identifying or determining the thing being argued for did not rely entirely on the phrase “folk virtue ethics”.
I don’t think we need to get into any controversial questions about interpersonal relationships and social institutions here. (Although the arguments I’ve made apply to these too.) I’d rather focus on the entirely ordinary, mundane, and uncontroversial instances of human cooperation and coordination. With this in mind, I think you’re making a mistake when you write:
[I]t seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In fact, the overwhelming part of folk ethics consists of decisions that are so ordinary and uncontroversial that we don’t even stop to think about them, and of interactions (and the resulting social norms and institutions) that are taken completely for granted by everyone—even though the complexity of the underlying coordination problems is enormous, and the way things really work is still largely mysterious to us. The thesis I’m advancing is that a lot of what may seem like bias and imperfection in folk ethics may in fact somehow be essential for the way these problems get solved, and seemingly airtight consequentialist arguments against clear folk-ethical intuitions may in fact be fatally flawed in this regard. (And I think they nearly always are.)
Now, if we move to the question of what happens in those exceptional situations where there is controversy and conflict, things do get more complicated. Here it’s important to note that the boundary between regular smooth human interactions and conflicts is fuzzy, insofar as the regular interactions often involve conflict resolution in regular and automatic ways, and there are no sharp limits between such events and more overt and dramatic conflict. Also, there is no sharp boundary between entirely instinctive folk ethics intuitions and those that are codified in more explicit social (and ultimately legal) norms.
And here we get to the controversies that you mention: the conflict between social and legal norms that embody and formalize folk intuitions of justice, fairness, proper behavior, etc. and evolve spontaneously through tradition, precedent, customary practice, etc., and the attempts to replace such norms by new ones backed by consequentialist arguments. Here, indeed, one can argue in favor of what you call “conservatism in matters of interpersonal relationships and social institutions” using arguments very similar to mine above. But whether or not you agree with such arguments, my main point can be made without even getting into any controversial issues.
Deciding with a well-behaved preference order includes but is not limited to probability.
Consequentialism doesn’t contradict those philosophies.
It doesn’t follow that I have to adopt consequentialist metaethics in order to avoid being ripped off at the racecourse or stock market.
The arguments I know are, a la MixedNuts, that bad things happen if you aren’t a utility maximizer.
Well, I probably won’t end up with my own utility maximised. What’s that got to do with ethics? It’s quite plausible that I should make sacrifices for ethical reasons.
If I am not utilitarian about X, X is not going to be maximised. But there are a lot of candidates for X, and they can’t all be maximised at once. Whatever version of consequentialism you adopt, there are going to be outcomes that are non-optimal by other versions’ standards. So adopt the right version? Maybe. But that is part of the larger problem of adopting the right metaethics. If deontology or rights theory is true, then you really shouldn’t push the fat guy, and then any form of consequentialism will lead to Bad Things.
Moral: we can’t straightforwardly judge metaethical theories by their tendency to produce good and bad, because we are using them to define good and bad.
There are things which are less-controversially bad than others.
Suppose a deontologist agrees that world A is better than world B.
Then there is, in general, a world C such that the deontologist refuses to move from B to C and then refuses to move from C to A, and is thus dragged kicking and screaming into a better world.
I agree that we can use strong and common intuitions to avoid the chicken-and-egg problem, but...
Then there is, in general, a world C such the deontologist refuses to move from B to C and then refuses to move from B to A, and is thus dragged kicking and screaming into a better world
I have no idea what you mean by that.
We don’t have strong intuitions about trolley problems, which is why they are problems.
The problem isn’t lack of intuitions, it’s conflict between them. Agree this makes them useless, but the effects are different—construct a general system from a mostly unrelated set of intuitions vs invalidate some intuitions.
I’m arguing that, if you are a deontologist, for all A such that if the world were in state B you would press a button that changed it to A, this dialogue could occur:
You: “Hi, Omega”
Omega: “The world is currently in state B. I have a button that changes it to state C. Wanna press it?”
You: “No, that would be immoral.”
Omega: “Well, I pressed it for you.”
You: “That was an immoral thing you just did!”
Omega: “Well, cheer up. This new button will not only fix my earlier immoral action and return us to state B, but also bring us to the superior world of state A!”
The parent seems to be correct and the point an obvious one. That is a trait—and arguable weakness—of deontological systems. It doesn’t show that deontological systems are bad, just explains what the most significant difference is between the actions dictated by vaguely similar utilitarian and deontological value systems.
This sounds suspiciously like evaluating deontology by saying “well, it doesn’t lead to maximum utility.”
In order to make this work you need to justify the properties of utility-maximization that you use from common principles—if these principles (consequentialism being the notable one here, I think) are not accepted, then of course the utilitarian answer won’t be accepted.
Deontology violates the principle “Two wrongs don’t make a right” and this bothers me.
I don’t understand your point here. Deontology can implement all sorts of “two wrongs make a right” rules. It also seems strange to see deontology criticised for violating what appears to be more or less a deontological principle itself.
To be honest it seems like Manfred suggested a quite reasonable way to evaluate deontology:
This sounds suspiciously like evaluating deontology by saying “well, it doesn’t lead to maximum utility.”
Damn right. Deontology makes bad stuff happen. Don’t do it!
I don’t understand your point here. Deontology can implement all sorts of “two wrongs make a right” rules. It also seems strange to see deontology criticised for violating what appears to be more or less a deontological principle itself.
I think you misunderstand what I mean by “Two wrongs don’t make a right”. It’s not a moral rule, it’s a logical (perhaps meta-moral?) rule. It says that if an action is wrong, and another action is wrong, then doing the first action, then the second, in rapid succession is wrong.
With enough logical rules like that, you can prove the existence of a preference order, thus deriving consequentialism.
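One way to see the Dutch-book flavor of that claim (this formalization is my own sketch, not Will’s): read each judged transition as a comparison between world-states and ask whether the resulting strict “better than” relation can be represented by any ordering at all. A cycle means it cannot, and a button-pressing Omega can then walk the agent around the cycle with every single step being one the agent condemns.

    # Sketch: check whether judged transitions are compatible with *some*
    # preference ordering over world-states. Edges point from a state to
    # the states judged strictly worse than it.

    def has_cycle(better_than):
        visited, in_stack = set(), set()

        def dfs(state):
            visited.add(state)
            in_stack.add(state)
            for worse in better_than.get(state, ()):
                if worse in in_stack or (worse not in visited and dfs(worse)):
                    return True
            in_stack.discard(state)
            return False

        return any(dfs(s) for s in list(better_than) if s not in visited)

    # Consistent judgments: A > B > C admits an ordering.
    print(has_cycle({"A": {"B"}, "B": {"C"}, "C": set()}))  # False
    # Cyclic judgments: A > B > C > A fits no preference order.
    print(has_cycle({"A": {"B"}, "B": {"C"}, "C": {"A"}}))  # True

Closure rules of the kind described above are what rule out the cyclic case and, with enough of them, pin down an ordering.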
Damn right. Deontology makes bad stuff happen. Don’t do it!
This is roughly my perspective; of course, I don’t think this argument would convince many deontologists.
This is another way of explaining why some of my posts in this thread are downvoted.
This is roughly my perspective; of course, I don’t think this argument would convince many deontologists.
Of course not. (I don’t find it all that useful to try to convince people to not have objectionable preferences of any kind. It does not tend to work.)
This is another way of explaining why some of my posts in this thread are downvoted.
Because you are arguing with deontologists? That was approximately my conclusion.
A = the world of today
B = the world of today, but all of Bill Gates’s money is now Alicorn’s money
C = the world of today, but everyone also owns a delicious chocolate-chip cookie
Moving from A=>B violates Bill Gates’s rights.
Moving from B=>C violates your rights.
isn’t preventing the existence of people who have stolen a consequentialist goal?
Taking into account the existence of people who have stolen is one way for a consequentialist to model the thinking of deontologists. If a consequentialist includes history of who-did-what-to-whom in his world states, he is capturing all of the information that a deontologist considers. Now, all that is left is to construct a utility function that attaches value to the history in the way that a deontologist would.
Voila! Something that approximates successful communication between deontologist and consequentialist.
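A minimal sketch of that modeling move (the state fields, rule, and penalty weight are mine, purely illustrative): fold the who-did-what-to-whom history into the world state and let the utility function read it, so a deontological prohibition appears as an ordinary term in a consequentialist evaluation.

    # Illustrative only: consequentialist evaluation over history-including
    # world states that reproduces the rule "I must not kill".
    from dataclasses import dataclass, field

    @dataclass
    class WorldState:
        lives_saved: int = 0
        history: list = field(default_factory=list)  # (agent, act, patient) tuples

    def utility(state, me="me"):
        u = float(state.lives_saved)
        # Enormous penalty on *my* killings, whoever benefits (hypothetical weight).
        u -= sum(1e6 for agent, act, _ in state.history if agent == me and act == "kill")
        return u

    push = WorldState(lives_saved=4, history=[("me", "kill", "fat man")])
    refrain = WorldState(lives_saved=0, history=[])
    print(utility(push) < utility(refrain))  # True: refusing to push wins despite the lives saved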
Unfortunately, all I can do is imagine a heated contest between two people over which of them is going to do some evil action XYZ that is going to be done regardless. They each want to ensure that they don’t do it, but for some reason it will necessarily be done, so they come to blows over it.
I may, in fact, be constitutionally incapable of successful communication with deontologists.
I’m not following you. Why is evil action XYZ going to be done regardless? Are you imagining that deontologists seek to have other people do their dirty deeds for them?
Well, exactly. It’s a possible situation in the mathematical framework of who-did-what-to-whom you created. I thought of it before I thought of a reason why. For many definitions of what “who-did-what-to-whom” means, a sufficiently clever reason why would be constructed.
Maybe it must be done to prevent bad stuff.
Maybe it’s a fact of the psychology of these two individuals that one of them is going to do it.
Maybe an AI in a box is going to convince one of two people with the power to release it, to release it—this is sort of like the last one?
Well, exactly. It’s a possible situation in the mathematical framework of who-did-what-to-whom you created. I thought of it before I thought of a reason why. For many definitions of what “who-did-what-to-whom” means, a sufficiently clever reason why would be constructed.
Maybe it must be done to prevent bad stuff.
Maybe it’s a fact of the psychology of these two individuals that one of them is going to do it.
Maybe an AI in a box is going to convince one of two people with the power to release it, to release it—this is sort of like the last one?
That is still hard to follow[*]. You seem to be saying that if a deontologist has the rule “don’t make the world worse” they must also have a rule “don’t make the world better”. I can’t think of the slightest justification of that.
[*] And I have no idea how anyone is supposed to work out the scenario in the parent from the potted version in the great-grandparent.
No, this is not the case. You have to cleverly choose B.
So let’s say, in both A and C, Eliezer Yudkowsky has a sack of gold. In B, Yvain has that sack of gold.
In one deontological morality, stealing gold from Eliezer and giving it to Yvain is always immoral, as is the opposite-directional theft.
This means that changing from A to B and changing from B to C are immoral.
(The fundamental problem here is that, while I am driven to respond to your comments, I am not driven to put much effort into those responses. I am still not sure which behavior to change, but together they are certainly pathological.)
I don’t hold to that one deontological morality. I think Jean Valjean was right to steal the bread. I think values/rules/duties tend to conflict, and resolution of such conflicts needs values/rules/duties to be arranged hierarchically. Thus the rightness of preventing his nephews’ starvation overrides the wrongness of stealing the bread. (“However, there is a difference between deontological ethics and moral absolutism”)
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a dutch-book-like argument.
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
I don’t have to have an exact morality to be sceptical of the idea that consequentialism is the One True Theory.
This reply does not fit the context. If Will is asked to instantiate from a general principle to a specific example then it is not reasonable to declare the general principle null because the specific example does not apply to the morality you happen to be thinking of.
(And the “One True Theory” business is a far less subtle straw man.)
If it’s OK to make a transition because of the nature of the transition (it’s an action which follows certain rules, respects certain rights, arises from certain intentions), then there is no need to re-explain the ordering of A, B and C in terms of anything about the states themselves—the ordering is derived from the transitions.
But if the properties of the transitions can be derived from the properties of the states, then it’s so much SIMPLER to talk about good states than good transitions.
Simplicity is tangential here; we are discussing what is right, not how to most efficiently determine it.
In what circumstances do you two actually disagree as to what one should do (I expect Peter to be more likely to answer this well as he is more familiar with typical LessWrongian utilitarianisms than Will is with Peter’s particular deontology)?
If those axioms hold, then a consequentialist moral framework is right.
You can argue that those axioms hold and yet consequentialism is not the One True Moral Theory, but it seems like an odd position to take on a purely definitional level.
(also, Robert Nozick violates those axioms, if anyone still cares about Robert Nozick, and the bag-of-gold example works on him)
If those axioms hold, then a consequentialist moral framework is right.
I don’t see why. Why would the existence of an ordering of states be a sufficient condition for consequentialism? And didn’t you need the additional argument about simplicity to make that work?
And if I can show that consequentialism needs to be combined with rules (or something else), does that prove consequentialism is really deontology (or something else)? It is rather easy to show that any one-legged approach is flawed, but if we end up with a mixed theory we should not label it as a one-legged theory.
Considering that this whole discussion was about how Robert Nozick isn’t (wasn’t?) a consequentialist, I think for these purposes we should classify his views as not consequentialism.
Perhaps an example of what I mean will be helpful.
Suppose your friend is kidnapped and being held for ransom. Naive consequentialism says you should pay because you value his life more than the money. TDT says you shouldn’t pay because paying counterfactually causes him to be kidnapped.
Note how in the scenario the TDT argument sounds very deontological.
“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any morality system. In particular by your definition Kant’s categorical imperative is consequentialist since it involves looking at the consequences of your actions in the hypothetical world where everyone performs them.
Yes, in that TDT-like decision/ethical theories are basically “consequentialism in which you must consider ‘acausal consequences’”.
While it may seem strange to regard ethical theories that apply Kant’s CI as “consequentialist”, it’s even stranger to call them deontological, because there is no deontic-like “rule set” they can be said to be following; it’s all simple maximization, albeit with a different definition of what you count as a benefit. TDT, for example, considers not only what your action causes (in the technical sense of future results), but the implications of the decision theory you instantiate having a particular output.
(I know there are a lot of comments I need to reply to, I will get to them, be patient.)
While it may seem strange to regard ethical theories that apply Kant’s CI as “consequentialist”, it’s even stranger to call them deontological, because there is no deontic-like “rule set” they can be said to be following;
It certainly is strange even if it is trivially possible. Any ‘consequentialist’ system can be implemented in a singleton deontological ‘rule set’. In fact, that’s the primary redeeming feature of deontology. Kind of like the best thing about Java is that you can use it to implement JRuby and bypass all of Java’s petty restrictions and short sighted rigidly enforced norms.
Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.
In both cases, while computing them you never assume anything which you know to be false, whereas Kant is not like that. (Just realised, I’m not sure this is right).
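A toy rendering of that picture (the payoffs and function names are mine, and this is only a sketch of the standard Newcomb setup, not anyone’s actual formalization): CDT performs its counterfactual surgery on the action node with the prediction held fixed, while a TDT-style agent performs it on the output of the decision procedure, which an accurate predictor’s prediction also depends on.

    # Toy Newcomb comparison of where the counterfactual surgery happens.

    def payoff(one_box, predicted_one_box):
        big = 1_000_000 if predicted_one_box else 0
        small = 0 if one_box else 1_000
        return big + small

    def cdt_choice(actual_prediction):
        """Intervene on the action only; the prediction node is held fixed."""
        return payoff(True, actual_prediction) > payoff(False, actual_prediction)

    def tdt_choice():
        """Intervene on the decision procedure's output; an accurate
        predictor's prediction covaries with that output."""
        return payoff(True, predicted_one_box=True) > payoff(False, predicted_one_box=False)

    print(cdt_choice(True), cdt_choice(False))  # False False: two-box either way
    print(tdt_choice())                         # True: one-box

In neither computation does the agent condition on something it already knows to be false, which is the contrast with the Kantian “what if everyone did that” counterfactual drawn in the comment above.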
In both cases, while computing them you never assume anything which you know to be false
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen. Omega’s coin didn’t come up heads, and your friend has been kidnapped. Nevertheless you need to consider the consequences of your policy in those counterfactual situations.
I think counterfactual mugging was originally brought up in the context of problems which TDT doesn’t solve, that is it gives the obvious but non-optimal answer. The reason is that regardless of my counterfactual decision Omega still flips the same outcome and still doesn’t pay.
Well that might explain some of our miscommunication. I’ll go back and check.
“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any morality system.
This makes sense using the first definition, at least, according to TDT it does.
Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.
This is clearly using the first definition.
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen.
This only makes sense with the second, and should probably be UDT rather than TDT—the original TDT didn’t get the right answer on the counterfactual mugging.
This only makes sense with the second, and should probably be UDT rather than TDT—the original TDT didn’t get the right answer on the counterfactual mugging.
What I meant by that statement was the idea that CDT works by basing counterfactuals on your action, which seems a reasonable basis for counterfactuals since prior to making your decision you obviously don’t know what your action will be. TDT similarly works by basing counterfactuals on your decision, which you also don’t know prior to making it.
Kant, on the other hand, bases his counter-factuals on what would happen if everyone did that, and it is possible that his will involve assuming things I know to be false in a sense that CDT and TDT don’t (e.g. when deciding whether to lie I evaluate possible worlds in which everyone lies and in which everyone tells the truth, both of which I know not to be the case).
Let’s say I have to decide what to do at 2 o’clock tomorrow. If I light a stick of dynamite, I will be exploded. If I don’t, then I won’t. I can predict that I will, in fact, not light a stick of dynamite tomorrow. I will then know that one of my counterfactuals is true and one is false.
I’m not sure I agree with myself. I think my analysis makes sense for the way TDT handles Newcomb’s problem or Prisoner’s dilemma, but it breaks down for Transparent Newcomb or Parfit’s Hitch-hiker. In those cases, owing to the assistance of a predictor, it seems like it is actually possible to know your decision in advance of making it.
Well you always know that one of your counterfactuals is true.
There is no need to make that assumption. The whole collection of possible decisions could be located on an impossible counterfactual. Incidentally, this is one way of making sense of Transparent Newcomb.
Would you ever actually be in a situation where you chose an action tied to an impossible counterfactual? Wouldn’t that represent a failure of Omega’s prediction?
It matters what you do when you are in an actually impossible counterfactual, because when earlier you decide what decision theory you’d be using in that counterfactual, you might yet not know that it is impossible, and so you need to precommit to act sensibly even in the situation that doesn’t actually exist (not that you would know that if you get in that situation). Seriously. And sometimes you take an action that determines the fact that you don’t exist, which you can easily obtain in a variation on Transparent Newcomb.
When you make the precommitment-to-business-as-usual conversion, you get a principle that decision theory shouldn’t care about whether the agent “actually exists”, and focus on what it knows instead.
All I’m saying is that when you actually make choices in reality, the counterfactual you end up using will happen. When a real Kant-Decision-Theory user makes choices, his favorite counterfactual will fail to actually occur.
You could possibly fix that by saying Omega isn’t perfect, but his predictions are correlated enough with your decision to make precommitment possible.
That is not my understanding. The only necessary addition to physics is “any possible mechanism of varying any element in your model of the universe”. I.e., you need physics and a tiny amount of closely related mathematics. That will give you a function that gives you every possible action → result pair.
I believe this only serves to strengthen your main point about the possibility of separating epistemic investigation from ethics entirely.
“any possible mechanism of varying any element in your model of the universe”.
That’s a decision theory. For instance, if you perform causal surgery, that’s CDT. If you change all computationally identical elements, that’s TDT. And so on.
That’s a decision theory. For instance, if you perform causal surgery, that’s CDT. If you change all computationally identical elements, that’s TDT. And so on.
I don’t agree. A decision theory will sometimes require the production of action → result pairs, as is the case with CDT, TDT and any other decision algorithm with a consequentialist component. Yet not all production of such pairs is a ‘decision theory’. A full mathematical model mapping every possible state to the outcomes produced is not a decision theory in any meaningful sense. It is just a solid understanding of all of physics.
On one hand we have (physics + the ability to consider counterfactuals) and on the other we have systems for choosing specific counterfactuals to consider and compare.
If you don’t have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?
If you don’t have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?
That is my point. That is what the decision theory is for!
This conversation is kinda pointless. Therefore, my response comes in a short version and a long version.
Short:
Sorry, that was unclear. I did not make the mistake your last post implies I made. I’m pretty sure you’ve made some mistakes, but they’re really minor. We have nothing left to discuss.
Long:
Sorry, that was unclear.
The first time I posted it, it was a response to Eugene. Then you responded, criticizing it. Then, finally, it appears like we agree, so I reassert my original claim to make sure. In that context, this response is strange:
Ok, and it is still a claim that doesn’t refute anything I have previously said.
I wasn’t trying to refute you with this claim, I was trying to refute Eugene, then you tried to refute the claim.
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a dutch-book-like argument.
According to the search tool, this was Less Wrong’s first use of “STFU” directed at another contributor. I’m pretty proud of the site for having avoided this term, and I’m pretty chagrined at you for having broken the streak.
Voted down vigorously. If you can’t make the effort to make yourself understood, STFU.
It should be no surprise that this outburst made me far more inclined to view the grandparent in a positive light. In this case the actual content of Will’s comment seems easy to understand. Given Peterdjones’s aggressive use of his own incomprehension, Will was rather more patient than he could have been. He could have linked to a Wikipedia article on the subject so that Peterdjones could get a grasp of the basics.
Will was rather more patient than he could have been.
Rather less careful, I would say. He failed to notice the typo above until nsheperd pointed it out—the original source of the confusion. And then later he began a comment with:
No, this is not the case. You have to cleverly choose B.
I have no idea at all what “is not the case”. And I also don’t know when anyone was offered the opportunity to cleverly choose B.
Will’s description of his own limited motivation to communicate is the only portion of this thread which is crystal clear.
Yes, by working pretty hard, I was able to ignore the initial typo and to anticipate the explanation of A, B, and C. As I point out elsewhere on this thread, I have some objections to the scenario (as leaving out some details important to deontologists). Perhaps PeterDJones had similar objections. Please notice that neither of us could object to Will’s A-B-C story until it was actually spelled out. And Will resisted making the effort of spelling it out far too long.
My “STFU” was rude. But sometimes rudeness is appropriate.
It seems to me the substance of Mr Savin’s objection could have been expressed more briefly and clearly as “Deontologists would not steal under any circumstances”. (Or even the familiar “Deontologists would not lie under any circumstances, even to save a life”).
It seems to me the substance of Mr Savin’s objection could have been expressed more briefly and clearly as “Deontologists would not steal under any circumstances”.
That does not appear to be the case. Those are examples of other things that he could have said which would provide a more convenient target for your reply. Assuming you refer to Will_Sawin, that is.
I had thought that people on lesswrong would be aware of the actual arguments.
Please describe the hypothetical person who would be helped at all, or convinced of any proposition, by being invited to reflect on arguments for and against consequentialism of which they were already aware.
Peterdjones, who says that:
I interpreted this to mean that he believed in consequentialism but did not feel I had sufficiently argued that non-consequentialism is evidence of irrationality. That is, that he was aware of arguments for consequentialism but was choosing not to apply them to the issue.
Maybe this interpretation was wrong, but it was not obviously wrong.
I don’t particularly believe in consequentialism.
I wouldn’t say that someone “is” irrational because they fail to argue one particular point.
It is just that energy spent asserting that certain ideas are or are not rational would be better spent putting forward arguments. Rationality is something you do.
Then either you can be dutch-booked or you can fail to dutch-book others.
You parsed my sentence wrong.
There are certain arguments which people on lesswrong are expected to know. Maybe the arguments for consequentialism are not among them?
I would recount them for you, but I don’t really think that will do any good.
I can avoid dutch booking by applying the laws of probability correctly. (And in contexts that have nothing to do with morality.) Do you think probability and consequentialism are somehow the same?
I have been reading the material on ethics and have yet to see such an argument. There is a tendency to talk in terms of utility functions, which tends to lend itself to a consequentialist way of thinking, but that is not so much proof as “if the only tool you have is a hammer...”.
I also notice that there are a lot of ethical subjectivists and non-cognitivists on LW. Maybe you could point them to this wonderful proof, if I am beyond hope.
It’s not the most sophisticated form of the argument, but Yvain’s recent Consequentialism FAQ is an excellent summary and a good read.
I hear bad things happen if you aren’t a utility maximizer. Utilitarianism doesn’t imply consquentialism, though; you can assign utility depending on whether (sentient?) decision processes choose virtuously and implement your favorite imperative. These ethical systems are consistent.
I find them quite appalling, however. What do you mean, saving four lives is less important than the virtue of not pushing people under trolleys?
You mean “having a utility function”, not “utilitarianism”. The latter is generally used to mean a specific batch of consequentialist utility functions.
The latter also assumes the possibility of interpersonal utility comparison, which is not the case with von Neumann-Morgenstern utility functions.
I find simplistic consequentialist views such as this one appalling, if anything because they combine supreme self-assuredness about important problems with ignorance and lack of insight about their vitally important aspects. (See my responses in the Consequentialism FAQ thread for more detail, especially the ones dealing specifically with trolley problems.)
Waaah! You’re a meanie mean-head! :( By which I mean: this was a one-sentence reaction to simplistic virtue ethics. I agree it’s not a valid criticism of complex systems like Alicorn’s tiered deontology. I also agree it’s fair to describe this view as simplistic—at the end of the day, I do in fact hold the naive view. I disagree that it can only exist in ignorance of counterarguments. In general, boiling down a position to one sentence provides no way to distinguish between “I don’t know any counterarguments” and “I know counterarguments, all of which I have rejected”.
Not sure what you mean, I’m going to map it onto “arrogance” until and unless I learn you meant otherwise. Arrogant people are annoying (hi, atheist blogosphere!), but in practice it isn’t correlated with false ideas.
Or is this just a regular accusation of overconfidence, stemming from “Hey, you underestimate the number of arguments you haven’t considered!”?
You go into social-norms-as-Schelling-points in detail (you seem to point at the existence of other strong arguments?); I agree about the basic idea (that’s why I don’t kill for organs). I disagree about how easily we should violate them. (In particular, Near lives are much safer to trade than Far ones.) Even “Only kill without provocation in the exact circumstances of one of the trolley problems” is a feasible change.
Also, least convenient possible world: after the experiment, everyone in the world goes into a holodeck and never interacts with anyone again.
Interestingly, when you said
I automatically pictured myself as the fat man, and felt admiration and gratitude for the heroic sociopath. Then I realized you meant a third party, and did feel creeped out. (This is as it should be; I should be more eager to die than to kill, to correct for selfishness.)
Actually, I was writing in favor of “simplistic” virtue ethics. However simplistic and irrational it may seem, and however rational, sophisticated, and logically airtight the consequentialist alternatives may appear to be, folk virtue ethics is a robust and workable way of managing human interaction and coordination, while consequentialist reasoning is usually at best simply wrong and at worst a rationalization of beliefs held for different (and often ugly) reasons.
You can compare it with folk physics vs. scientific physics. The former has many flaws, but even if you’re a physicist, for nearly all things you do in practice, scientific physics is useless, while folk physics works great. (You won’t learn to ride a bike or throw a ball by studying physics, but by honing your folk physics instincts.) While folk physics works robustly and reliably in complex and messy real-world situations, handling them with scientific physics is often intractable and always prone to error.
Of course, this comparison is too favorable. We do know enough scientific physics to apply it to almost any situation at least in principle, and there are many situations where we know how to apply it successfully with real accuracy and rigor, and where folk physics is useless or worse. In contrast, attempts to supersede folk virtue ethics with consequentialism are practically always fallacious one way or another.
So, the fully naive system? Killing makes you a bad person, letting people die is neutral; saving lives makes you a good person, letting people live is neutral. Giving to charity is good, because sacrifice and wanting to help makes you a good person. There are sacred values (e.g. lives) and mundane ones (e.g. money) and trading between them makes you a bad person. What matters is being a good person, not effects like expected number of deaths, so running cost-benefit analyses is at best misguided and at worst evil. Is this a fair description of folk ethics?
If so, I would argue that the bar for doing better is very, very low. There are a zillion biases that apply: scope insensitivity, loss aversion that flips decisions depending on framing, need for closure, pressure to conform, Near/Far discrepancies, fuzzy judgements that mix up feasible and desirable, outright wishful thinking, prejudice against outgroups, overconfidence, and so on. In ethics, unless you’re going to get punished for defecting against a norm, you don’t have a stake, so biases can run free and don’t get any feedback.
Now there are consequentialist arguments for virtue ethics, and general majoritarian-ish arguments for “norms aren’t completely stupid”, so this only argues for “keep roughly the same system but correct for known biases”. But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
And this is a consequentialist argument. “If I try to kill some to save more, I’ll almost certainly overestimate lives saved and underestimate knock-on effects” is a perfectly good argument. “Killing some to save more makes me a bad person”… not so much.
No, because we don’t even know (yet?) how to formulate such a description. The actual decision procedures in our heads have still not been reverse-engineered, and even insofar as they have, they have still not been explained in game-theoretical and other important terms. We have only started to scratch the surface in this respect.
(Note also that there is a big difference between the principles that people will affirm in the abstract and those they apply in practice, and these inconsistencies are also still far from being fully explained.)
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions. For practically any problem that’s complicated enough to be realistic and interesting, we lack the necessary knowledge and computational resources to make reliable consequentialist assessments, in terms of QALY or any other standardized measure of welfare. (Also, very few, if any, things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)
Moreover, for any problem that is relevant for questions of power, status, wealth, and ideology, it’s practically impossible to avoid biases. In the end, what looks like a dispassionate and perhaps even scientific attempt to evaluate things using some standardized measure of welfare is more likely than not to be just a sophisticated fig-leaf (conscious or not) for some ideological agenda. (Most notably, the majority of what we call “social science” has historically been developed for that purpose.)
Yes, this is a very pessimistic verdict, but an attempt at sound reasoning should start by recognizing the limits of our knowledge.
I agree with much of your worldview as I’ve interpreted it. In particular I agree that:
•Behavioral norms evolved by natural selection to solve coordination problems and to allow humans to work together productively given the particulars of our biological hard-wiring.
•Many apparently logically sound departures from behavioral norms will not serve their intended functions for complicated reasons of which people don’t have explicit understanding.
•Human civilization is a complicated dynamical system which is (in some sense) at equilibrium, and attempts to shift from this equilibrium will often either fail (because of equilibrating forces) or lead to disaster (on account of destabilizing the equilibrium and causing everything to fall apart).
•The standard of rigor and accuracy in the social sciences is often very poor, owing both to the biases of the researchers involved and to the inherent complexity of the relevant problems (as you described in your top-level post).
On the other hand, here and elsewhere in the thread you present criticism without offering alternatives. Criticism is not without value but its value is contingent on the existence of superior alternatives.
What do you suggest as an alternative to MixedNuts’ suggestion?
As rhollerith_dot_com said, folk ethics gives ambiguous prescriptions in many cases of practical import. One can avoid some such issues by focusing one’s efforts elsewhere, but not in all cases. People representative of the general population have strong differences of opinion as to what sorts of jobs are virtuous and what sorts of philanthropic activities are worthwhile. So folk ethics alone doesn’t suffice to give a practically applicable ethical theory.
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The discussion has drifted away somewhat from the original disagreement, which was about situations where a seemingly clear-cut consequentialist argument clashes with a nearly universal folk-ethical intuition (as exemplified by various trolley-type problems). I agree that folk ethics (and its natural customary and institutional outgrowths) are ambiguous and conflicted in some situations to the point of being useless as a guide, and the number of such situations may well increase with the technological developments in the future. I don’t pretend to have any great insight about these problems. In this discussion, I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased.
Regarding this, though:
The important point is that most conflicts get resolved in spontaneous, or at least tolerably costly ways because the conflicting parties tacitly share a focal point when an interpersonal trade-off is inevitable. The key insight here is that important focal points that enable things to run smoothly often lack any rational justification by themselves. What makes them valuable is simply that they are recognized as such by all the parties involved, whatever they are—and therefore they often may seem completely irrational or unfair by other standards.
Now, consequentialists may come up with a way of improving this situation by whatever measure of welfare they use. However, what they cannot do reliably is to make people accept the implied new interpersonal trade-offs as new focal points, and if they don’t, the plan will backfire—maybe with a spontaneous reversion to the status quo ante, and maybe with a disastrous conflict brought by the wrecking of the old network of tacit agreements. Of course, it may also happen that the new interpersonal trade-offs are accepted (whether enthusiastically or by forceful imposition) and the reform is successful. What is essential to recognize, however, is that interpersonal trade-offs are not only theoretically indeterminate, but also that any way of resolving them must deal with these complicated issues of whether it will be workable in practice. For this reason, many consequentialist designs that look great on paper are best avoided in practice.
Thanks for your response!
I agree. And I like the rest of your response about tacitly shared focal points.
Part of what you may be running up against on LW is people here (a) having a low intuitive sense for what these focal points are, and (b) the existing norms being designed to be tolerable for ‘most people’, with LWers falling outside of ‘most people’ and correspondingly finding existing norms intolerable with higher than usual frequency.
I know that each of (a) and (b) sometimes applies to me personally.
Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.
Okay, I don’t get it. I can only parse what you’re saying one of two ways:
“We don’t have any idea how folk ethics works.” But that’s not true; we know it’s not “whatever Emperor Ming says”. We can and do observe folk ethics at work, and notice it favors ingroups, is loss averse, is scope insensitive, etc.
“Any attempt to do better won’t be perfectly free of bias. Therefore, you can’t do better. Therefore, the best you can do is to use folk ethics… which has a bunch of known biases.”
You very likely don’t mean either of these, so I don’t know what you’re trying to say.
These statements are a somewhat crude and exaggerated version of what I had in mind, but they’re actually not that far off the mark.
Basic human folk ethics, shaped within certain bounds by culture, is amazingly successful in ensuring human coordination and cooperation in practice, at both small and large scales. (The fact that we see its occasional bad failures as dramatic and tragic only shows that we’re used to it working great most of the time.) The key issue here is that these coordination problems are extremely hard and largely beyond our understanding. While we can predict with some accuracy how individual humans behave, the problems of coordinating groups of people involve countless complicated issues of game theory, signaling, etc., about which we’re still largely ignorant. In this sense, we really don’t understand how folk ethics works.
Now, the important thing to note is that various aspects of folk ethics may seem irrational and biased (in the sense that changing them would have positive consequences by some reasonable measure), while in fact the truth is much more complicated. These “biases” may in fact be essential for the way human coordination works in practice, for some reason that’s still mysterious to us. Even if they don’t have any direct useful purpose, it may well be that given the constraints of human minds, eliminating them is impossible without breaking something else badly. (A prime example is that once someone goes down the road of breaking intuitively appealing folk-ethics principles in the name of consequentialist calculations, it’s practically certain that these calculations will end up being fatally biased.)
Here I have of course handwaved the question of how exactly successful human cooperation depends on the culture-specific content of people’s folk ethics. That question is fascinating, complicated, and impossible to tackle without opening all sorts of ideologically charged issues. But in any case, it presents even further complications and difficulties for any attempt at analyzing and fixing human intuitions by consequentialist reasoning.
(Also, similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.)
Yeah, that seems to be the crux of our disagreement. You still trust people, you haven’t seen them march into death and drag their children along with them and reject a thousand warnings along the way with contempt for such absurd and evil suggestions.
I agree that going against social norms is very costly, that we need cooperation more than ever now there’s seven billion of us, and that if something is bad you still need to coordinate against it. But consider this anecdote:
Many years ago, when I was but a child, I wished to search for the best and rightest politician, and to put them in power. And eagerly did I listen to all, and carefully did I consider their arguments, and honestly did I weight them against history and the evening news. And lo, an ideology was born, and I gave it my allegiance. But still doubts nagged and arguments wavered, and I wished for closure.
One day my politician of choice called for a rally, and to the rally I went; filled with doubt, but willing to serve. And such joy came upon me that I knew I was right; this wave of bliss was the true sign that my cause was just. (For I was but a child, and did not know the laws of entanglement; I knew not that such joy told of human psychology, and not of world states.)
Then it came to pass that I read a history textbook, and in the book was an excerpt from Robert Brasillach, who too described this joy, and who too claimed it as proof of his ideology. Which was fascism. Oops.
So, yeah, never falling for that one again.
Could you say more about what makes folk ethics a form of virtue ethics (or at least sufficiently virtue-based for you to use the term “folk virtue ethics”)? I can see some aspects of it that are virtue-based, but overall it seems like a hodgepodge of different intuitions/emotions/etc.
Yes, it’s certainly not a clear-cut classification. However, I’d say that the principal mechanisms of folk ethics are very much virtue-based, i.e. they revolve around asking what sort of person acts in a particular way, and what can be inferred about others’ actions and one’s own choice of actions from that.
Your praise for folk ethics would be more persuasive to me, Vladimir, if it came with a description of folk ethics—and if that description explained how folk ethics avoids giving ambiguous answers in many important situations—because it seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In other words, although I am sympathetic to arguments for conservatism in matters of interpersonal relationships and social institutions, your argument would be a whole lot stronger if the process of identifying or determining the thing being argued for did not rely entirely on the phrase “folk virtue ethics”.
I don’t think we need to get into any controversial questions about interpersonal relationships and social institutions here. (Although the arguments I’ve made apply to these too.) I’d rather focus on the entirely ordinary, mundane, and uncontroversial instances of human cooperation and coordination. With this in mind, I think you’re making a mistake when you write:
In fact, the overwhelming part of folk ethics consists of decisions that are so ordinary and uncontroversial that we don’t even stop to think about them, and of interactions (and the resulting social norms and institutions) that are taken completely for granted by everyone—even though the complexity of the underlying coordination problems is enormous, and the way things really work is still largely mysterious to us. The thesis I’m advancing is that a lot of what may seem like bias and imperfection in folk ethics may in fact somehow be essential for the way these problems get solved, and seemingly airtight consequentialist arguments against clear folk-ethical intuitions may in fact be fatally flawed in this regard. (And I think they nearly always are.)
Now, if we move to the question of what happens in those exceptional situations where there is controversy and conflict, things do get more complicated. Here it’s important to note that the boundary between regular smooth human interactions and conflicts is fuzzy, insofar as the regular interactions often involve conflict resolution in regular and automatic ways, and there are no sharp limits between such events and more overt and dramatic conflict. Also, there is no sharp bound between entirely instinctive folk ethics intuitions and those that are codified in more explicit social (and ultimately legal) norms.
And here we get to the controversies that you mention: the conflict between social and legal norms that embody and formalize folk intuitions of justice, fairness, proper behavior, etc. and evolve spontaneously through tradition, precedent, customary practice, etc., and the attempts to replace such norms by new ones backed by consequentialist arguments. Here, indeed, one can argue in favor of what you call “conservatism in matters of interpersonal relationships and social institutions” using arguments very similar to mine above. But whether or not you agree with such arguments, my main point can be made without even getting into any controversial issues.
Deciding with a well-behaved preference order includes but is not limited to probability.
Consequentialism doesn’t contradict those philosophies.
The arguments I know are, a la MixedNuts, bad things happen if you aren’t a utility maximizer.
You can maximize a subjective utility function.
It doesn’t follow that I have to adopt consequentialist metaethics in order to avoid being ripped off at the racecourse or stock market.
Well, I probably won’t end up with my own utility maximised. What’s that got to do with ethics? It’s quite plausible that I should make sacrifices for ethical reasons.
Please don’t use “metaethics” as a word for ethics.
You will sacrifice and no one else will benefit.
If I am not utilitarian about X, X is not going to be maximised. But there are a lot of candidates for X, and they can’t all be maximised at once. Whatever version of consequentialism you adopt, there are going to be non-optimal outcomes by other measures. So adopt the right version? Maybe. But that is part of the larger problem of adopting the right metaethics. If deontology or rights theory is true, then you really shouldn’t push the fat guy, and then any form of consequentialism will lead to Bad Things.
Moral: we can’t straightforwardly judge metaethical theories by their tendency to produce good and bad, because we are using them to define good and bad.
There are things which are less-controversially bad than others.
Suppose a deontologist agrees that world A is better than world B.
Then there is, in general, a world C such that the deontologist refuses to move from B to C and then refuses to move from C to A, and is thus dragged kicking and screaming into a better world.
Do you mean from B to C and then C to A?
Fixed, thanks.
I agree that we can use strong and common intuitions to avoid the chicken-and-egg problem, but...
I have no idea what you mean by that.
We don’t have strong intuitions about trolley problems, which is why they are problems.
I’ve never met a person who didn’t have one. They’re problems because we have strong, different intuitions.
Didn’t have one what?
And where intuitions are strong and varying, we can’t use them to decide between ethical systems.
Who didn’t have a strong intuition.
The problem isn’t lack of intuitions, it’s conflict between them. Agreed, this makes them useless for deciding, but the effects are different: constructing a general system from a mostly unrelated set of intuitions vs. invalidating some intuitions.
Hmm. There’s plenty of conflict over whether abortion is right or wrong, and very little over whether murder is right or wrong.
But plenty of conflict on what is/isn’t murder.
I’m arguing that, if you are a deontologist, for all A such that if the world were in state B you would press a button that changed it to A, this dialogue could occur:
You: “Hi, Omega”
Omega: “The world is currently in state B. I have a button that changes it to state C. Wanna press it?”
You: “No, that would be immoral.”
Omega: “Well, I pressed it for you.”
You: “That was an immoral thing you just did!”
Omega: “Well, cheer up. This new button will not only fix my earlier immoral action, it will bring us all the way to the superior world A!”
You: “Sounds awesome.”
Omega: “Wanna press it?”
You: “No, that would be immoral.”
The parent seems to be correct and the point an obvious one. That is a trait—and arguably a weakness—of deontological systems. It doesn’t show that deontological systems are bad; it just explains the most significant difference between the actions dictated by vaguely similar utilitarian and deontological value systems.
This sounds suspiciously like evaluating deontology by saying “well, it doesn’t lead to maximum utility.”
In order to make this work you need to justify the properties of utility-maximization that you use from common principles—if these principles (consequentialism being the notable one here, I think) are not accepted, then of course the utilitarian answer won’t be accepted.
I’m using something along the lines of transitivity.
Deontology violates the principle “Two wrongs don’t make a right” and this bothers me.
I don’t understand your point here. Deontology can implement all sorts of “two wrongs make a right” rules. It also seems strange to see deontology criticised for violating what appears to be more or less a deontological principle itself.
To be honest it seems like Manfred suggested a quite reasonable way to evaluate deontology:
Damn right. Deontology makes bad stuff happen. Don’t do it!
I think you misunderstand what I mean by “Two wrongs don’t make a right”. It’s not a moral rule, it’s a logical (perhaps meta-moral?) rule. It says that if an action is wrong, and another action is wrong, then doing the first action, then the second, in rapid succession is wrong.
With enough logical rules like that, you can prove the existence of a preference order, thus deriving consequentialism.
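To make the “enough rules force a preference order” claim concrete, here is a rough sketch in Python (my own illustrative formalization, not anything canonical): a judgment function ok(a, b) over state transitions can be represented by a goodness ranking exactly when some assignment of numbers to states reproduces it, and judgments that break closure rules like transitivity of OK-ness or the two-wrongs rule admit no such ranking.

    from itertools import product

    STATES = ["A", "B", "C"]

    def is_rationalizable(ok):
        # Brute-force search for a numeric "goodness" assignment u such that
        # ok(a, b) holds exactly when u[b] >= u[a].
        for values in product(range(len(STATES)), repeat=len(STATES)):
            u = dict(zip(STATES, values))
            if all(ok(a, b) == (u[b] >= u[a]) for a in STATES for b in STATES):
                return u
        return None

    # A judgment that respects the closure rules: it comes out as a preference order.
    ok_ranked = lambda a, b: b >= a               # string order as a stand-in ranking
    print(is_rationalizable(ok_ranked))           # {'A': 0, 'B': 1, 'C': 2}

    # "It is never OK to change anything": no goodness ranking reproduces this.
    ok_never_act = lambda a, b: a == b
    print(is_rationalizable(ok_never_act))        # None

    # A->B OK and B->C OK but A->C not OK: breaks composition, so no ranking.
    ok_intransitive = lambda a, b: (a == b) or (a, b) in {("A", "B"), ("B", "C")}
    print(is_rationalizable(ok_intransitive))     # None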
This is roughly my perspective; of course, I don’t think this argument would convince many deontologists.
This is another way of explaining why some of my posts in this thread are downvoted.
Of course not. (I don’t find it all that useful to try to convince people to not have objectionable preferences of any kind. It does not tend to work.)
Because you are arguing with deontologists? That was approximately my conclusion.
Because I am doing so poorly.
I don’t follow. Can you give a more specific example for A, B, and C?
A = the world of today. B = the world of today, but all of Bill Gates’s money is now Alicorn’s money. C = the world of today, but everyone also owns a delicious chocolate-chip cookie.
Moving from A=>B violates Bill Gates’s rights. Moving from B=>C violates your rights.
Does world B contain someone who stole Bill’s money? Does world C contain someone who stole Alicorn’s money?
One reason that you are having trouble seeing the world as a deontologist sees it is that you stubbornly refuse to even try.
In the example, yes, Omega, and yes, peterdjones.
But isn’t preventing the existence of people who have stolen a consequentialist goal?
Taking into account the existence of people who have stolen is one way for a consequentialist to model the thinking of deontologists. If a consequentialist includes history of who-did-what-to-whom in his world states, he is capturing all of the information that a deontologist considers. Now, all that is left is to construct a utility function that attaches value to the history in the way that a deontologist would.
Voila! Something that approximates successful communication between deontologist and consequentialist.
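Here is a minimal sketch of that move in Python, under my own toy assumptions (the names and numbers are purely illustrative): the consequentialist’s world states carry a who-did-what-to-whom history, and the utility function weights the agent’s own killings so heavily that it reproduces a roughly deontological verdict on a trolley-style choice.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WorldState:
        lives_saved: int
        history: tuple = ()   # who-did-what-to-whom records, e.g. ("me", "killed", "fat man")

    def deontic_flavored_utility(state, agent="me"):
        # Value outcomes, but treat the agent's own killings as overwhelmingly bad.
        own_killings = sum(1 for (who, what, _) in state.history
                           if who == agent and what == "killed")
        return state.lives_saved - 1_000_000 * own_killings

    push = WorldState(lives_saved=5, history=(("me", "killed", "fat man"),))
    refrain = WorldState(lives_saved=0)

    # The "don't push" world wins despite saving fewer lives.
    print(deontic_flavored_utility(push) < deontic_flavored_utility(refrain))  # True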
Unfortunately, all I can do is imagine a heated contest between two people over which of them is going to do some evil action XYZ that is going to be done regardless. They each want to ensure that they don’t do it, but for some reason it will necessarily be done, so they come to blows over it.
I may, in fact, be constitutionally incapable of successful communication with deontologists.
I’m not following you. Why is evil action XYZ going to be done regardless? Are you imagining that deontologists seek to have other people do their dirty deeds for them?
Well, exactly. It’s a possible situation in the mathematical framework of who-did-what-to-whom you created. I thought of it before I thought of a reason why. For many definitions of what “who-did-what-to-whom” means, a sufficiently clever reason why would be constructed.
Maybe it must be done to prevent bad stuff.
Maybe it’s a fact of the psychology of these two individuals that one of them is going to do it.
Maybe an AI in a box is going to convince one of two people with the power to release it, to release it—this is sort of like the last one?
That is still hard to follow[*]. You seem to be saying that if a deontologist has the rule “don’t make the world worse” they must also have a rule “don’t make the world better”. I can’t think of the slightest justification of that.
[*] And I have no idea how anyone is supposed to work out the scenario in the parent from the potted version in the great-grandparent.
No, this is not the case. You have to cleverly choose B.
So let’s say, in both A and C, Eliezer Yudkowsky has a sack of gold. In B, Yvain has that sack of gold.
In one deontological morality, stealing gold from Eliezer and giving it to Yvain is always immoral, as is the opposite-directional theft.
This means that changing from A to B and changing from B to C are both immoral.
(The fundamental problem here is that, while I am driven to respond to your comments, I am not driven to put much effort into those responses. I am still not sure which behavior to change, but together they are certainly pathological.)
I don’t hold to that one deontological morality. I think Jean Valjean was right to steal the bread. I think values/rules/duties tend to conflict, and resolution of such conflicts needs values/rules/duties to be arranged hierarchically. Thus the rightness of preventing his nephews’ starvation overrides the wrongness of stealing the bread. (“However, there is a difference between deontological ethics and moral absolutism.”)
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a dutch-book-like argument.
I don’t have to have an exact morality to be sceptical of the idea that consequentialism is the One True Theory.
This reply does not fit the context. If Will is asked to instantiate from a general principle to a specific example then it is not reasonable to declare the general principle null because the specific example does not apply to the morality you happen to be thinking of.
(And the “One True Theory” business is a far less subtle straw man.)
Suppose you have a system with some set of states, such that changing from state A to state B is either OK or not OK.
Then, assuming you accept axioms along the lines of the ones above (transitivity, the “two wrongs don’t make a right” composition rule, and so on), you get a preference order on the states. Presto, consequentialism.
If it’s OK to make a transition because of the nature of the transition (it’s an action which follows certain rules, respects certain rights, arises from certain intentions), then there is no need to re-explain the ordering of A, B, and C in terms of anything about the states themselves—the ordering is derived from the transitions.
But if the properties of the transitions can be derived from the properties of the states, then it’s so much SIMPLER to talk about good states than good transitions.
Simplicity is tangential here; we are discussing what is right, not how to most efficiently determine it.
In what circumstances do you two actually disagree as to what one should do (I expect Peter to be more likely to answer this well as he is more familiar with typical LessWrongian utilitarianisms than Will is with Peter’s particular deontology)?
Well, a better way to frame what I said is:
If those axioms hold, then a consequentialist moral framework is right.
You can argue that those axioms hold and yet consequentialism is not the One True Moral Theory, but it seems like an odd position to take on a purely definitional level.
(also, Robert Nozick violates those axioms, if anyone still cares about Robert Nozick, and the bag-of-gold example works on him)
I don’t see why. Why would the existence of an ordering of states be a sufficient condition for consequentialism? And didn’t you need the additional argument about simplicity to make that work?
So consequentialism says “doing right is making good”. But it doesn’t say what “making good” means. So it’s a family of moral theories.
What moral theories are part of the consequentialist family? All theories that can be expressed as “doing right is making X” for some X.
If I show that your moral theory can be expressed in that manner, I show that you are, in this sense, a consequentialist.
And if I can show that consequentialism needs to be combined with rules (or something else), does that prove consequentialism is really deontology (or something else)? It is rather easy to show that any one-legged approach is flawed, but if we end up with a mixed theory we should not label it as a one-legged theory.
Then you should end up violating one of the axioms and getting a not-consequentialism.
All consequentialist theories produce a set of rules.
The right way to define “deontology”, then, is a theory that is a set of rules that couldn’t be consequentialist.
If you mix consequentialism and deontology, you get deontology.
If you mix consequentialism and deontology you get Nozickian side-constraints consequentialism.
Good example. You could have consequentialism about what you should do, and deontology about what you should refrain from.
Considering that this whole discussion was about how Robert Nozick isn’t (wasn’t?) a consequentialist, I think for these purposes we should classify his views as not consequentialism.
Would you count Timeless Decision Theory as deontological since it isn’t pure consequentialism?
No, it’s a decision theory, not an ethical theory.
I don’t understand the distinction you’re making.
Decision theories tell you what options you have: Pairs of actions and results.
Ethical theories tells you which options are superior.
Perhaps an example of what I mean will be helpful.
Suppose your friend is kidnapped and being held for ransom. Naive consequentialism says you should pay because you value his life more than the money. TDT says you shouldn’t pay because paying counterfactually causes him to be kidnapped.
Note how in the scenario the TDT argument sounds very deontological.
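A highly simplified sketch, with made-up numbers of my own (this is closer to a generic policy-level evaluation than to TDT’s actual machinery): the kidnapper is assumed to target you only in proportion to how likely he thinks you are to pay, so conditioning on the kidnapping favors paying, while evaluating the policy favors refusing.

    FRIEND_LIFE = 100.0
    RANSOM = 10.0
    P_KIDNAP_IF_PAYER = 0.5      # assumed: known payers attract kidnappers
    P_KIDNAP_IF_REFUSER = 0.01   # assumed: known refusers are rarely targeted

    def value_given_kidnapped(pays):
        # CDT-style view: condition on the kidnapping having already happened.
        return (FRIEND_LIFE - RANSOM) if pays else 0.0

    def value_of_policy(pays):
        # Policy-level view: your disposition influences whether you get targeted.
        p = P_KIDNAP_IF_PAYER if pays else P_KIDNAP_IF_REFUSER
        loss_if_targeted = RANSOM if pays else FRIEND_LIFE
        return FRIEND_LIFE - p * loss_if_targeted

    print(value_given_kidnapped(True) > value_given_kidnapped(False))  # True: pay
    print(value_of_policy(True) > value_of_policy(False))              # False: refuse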
It sounds deontological, but it isn’t. It’s consequentialist. It evaluates options according to their consequences.
“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any moral system. In particular, by your definition Kant’s categorical imperative is consequentialist, since it involves looking at the consequences of your actions in the hypothetical world where everyone performs them.
Yes, in that TDT-like decision/ethical theories are basically “consequentialism in which you must consider ‘acausal consequences’”.
While it may seem strange to regard ethical theories that apply Kant’s CI as “consequentialist”, it’s even stranger to call them deontological, because there is no deontic-like “rule set” they can be said to be following; it’s all simple maximization, albeit with a different definition of what you count as a benefit. TDT, for example, considers not only what your action causes (in the technical sense of future results), but the implications of the decision theory you instantiate having a particular output.
(I know there are a lot of comments I need to reply to, I will get to them, be patient.)
It certainly is strange even if it is trivially possible. Any ‘consequentialist’ system can be implemented in a singleton deontological ‘rule set’. In fact, that’s the primary redeeming feature of deontology. Kind of like the best thing about Java is that you can use it to implement JRuby and bypass all of Java’s petty restrictions and short sighted rigidly enforced norms.
Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.
In both cases, while computing them you never assume anything which you know to be false, whereas Kant is not like that. (Just realised, I’m not sure this is right).
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen. Omega’s coin didn’t come up heads, and your friend has been kidnapped. Nevertheless you need to consider the consequences of your policy in those counterfactual situations.
I think counterfactual mugging was originally brought up in the context of problems which TDT doesn’t solve; that is, it gives the obvious but non-optimal answer. The reason is that regardless of my counterfactual decision, Omega still flips the same outcome and still doesn’t pay.
There are two rather different things both going under the name counterfactuals.
One is when I think of what the world would be like if I did something that I’m not going to do.
Another is when I think of what the world would be like if something not under my control had happened differently, and how my actions affect that.
They’re almost orthogonal, so I question the utility of using the same word.
Well, I’ve been consistently using the word “counterfactual” in your second sense.
Well that might explain some of our miscommunication. I’ll go back and check.
This makes sense using the first definition, at least, according to TDT it does.
This is clearly using the first definition.
This only makes sense with the second, and should probably be UDT rather than TDT—the original TDT didn’t get the right answer on the counterfactual mugging.
Sorry, I meant something closer to UDT.
Alright cool. So I think that’s what’s going on—we all agree but were using different definitions of counterfactuals.
You need a proof-system to ensure that you never assume anything which you know to be false.
ADT and some related theories have achieved this. I don’t think TDT has.
What I meant by that statement was the idea that CDT works by basing counterfactuals on your action, which seems a reasonable basis for counterfactuals since prior to making your decision you obviously don’t know what your action will be. TDT similarly works by basing counterfactuals on your decision, which you also don’t know prior to making it.
Kant, on the other hand, bases his counter-factuals on what would happen if everyone did that, and it is possible that his will involve assuming things I know to be false in a sense that CDT and TDT don’t (e.g. when deciding whether to lie I evaluate possible worlds in which everyone lies and in which everyone tells the truth, both of which I know not to be the case).
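To illustrate just that contrast (my own toy model, with invented numbers; it only covers the CDT-vs-Kant part, not TDT’s intermediate move of varying the outputs of computationally identical decision procedures): the CDT-style counterfactual varies only my action while holding everyone else fixed, whereas the Kantian test evaluates a world where everyone acts likewise, which I know to be false.

    POPULATION = 1000
    LIAR_FRACTION = 0.3           # assumed fraction of other people who lie anyway
    PERSONAL_GAIN_FROM_LIE = 0.1  # assumed small benefit from one convenient lie

    def trust_level(liars):
        # Stand-in: communication is valuable only while liars are rare.
        return max(0.0, 1.0 - liars / POPULATION)

    def cdt_value(i_lie):
        # Vary only my action; everyone else's behavior is held fixed.
        liars = LIAR_FRACTION * POPULATION + (1 if i_lie else 0)
        return (PERSONAL_GAIN_FROM_LIE if i_lie else 0.0) + trust_level(liars)

    def kant_value(i_lie):
        # "What if everyone in my situation did that?" -- a world I know to be false.
        liars = POPULATION if i_lie else 0
        return (PERSONAL_GAIN_FROM_LIE if i_lie else 0.0) + trust_level(liars)

    print(cdt_value(True) > cdt_value(False))    # True: the lie looks good locally
    print(kant_value(True) > kant_value(False))  # False: universalized lying destroys trust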
Well here is the issue.
Let’s say I have to decide what to do at 2 o’clock tomorrow. If I light a stick of dynamite, I will be exploded. If I don’t, then I won’t. I can predict that I will, in fact, not light a stick of dynamite tomorrow. I will then know that one of my counterfactuals is true and one is false.
This can mess up the logic of decision-making. There are ways of handling this (see http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/). This ensures that you can never figure out a decision before making it, which makes things simpler.
I’m not sure if this contradicts what you’ve said.
And I would agree exactly with your analysis about what’s wrong with Kant, and how that’s different from CDT and TDT.
I’m not sure I agree with myself. I think my analysis makes sense for the way TDT handles Newcomb’s problem or Prisoner’s dilemma, but it breaks down for Transparent Newcomb or Parfit’s Hitch-hiker. In those cases, owing to the assistance of a predictor, it seems like it is actually possible to know your decision in advance of making it.
Well you always know that one of your counterfactuals is true.
and Transparent Newcomb is a bit weird because one of the four possible strategies just explodes it.
There is no need to make that assumption. The whole collection of possible decisions could be located on an impossible counterfactual. Incidentally, this is one way of making sense of Transparent Newcomb.
Would you ever actually be in a situation where you chose an action tied to an impossible counterfactual? Wouldn’t that represent a failure of Omega’s prediction?
And since you always choose an action...
It matters what you do when you are in an actually impossible counterfactual, because when earlier you decide what decision theory you’d be using in that counterfactual, you might yet not know that it is impossible, and so you need to precommit to act sensibly even in the situation that doesn’t actually exist (not that you would know that if you get in that situation). Seriously. And sometimes you take an action that determines the fact that you don’t exist, which you can easily obtain in a variation on Transparent Newcomb.
When you make the precommitment-to-business-as-usual conversion, you get a principle that decision theory shouldn’t care about whether the agent “actually exists”, and focus on what it knows instead.
Yes. The actually impossible counterfactuals matter. All I’m saying is that the possible counterfactuals exist.
If you took such an action, wouldn’t you not exist? I request elaboration.
(You’ve probably misunderstood, I edited for clarity; will probably reply later, if that is not an actually impossible event.)
New reply: Yes, I agree.
All I’m saying is that when you actually make choices in reality, the counterfactual you end up using will happen. When a real Kant-Decision-Theory user makes choices, his favorite counterfactual will fail to actually occur.
You could possibly fix that by saying Omega isn’t perfect, but his predictions are correlated enough with your decision to make precommitment possible.
Yes. However that decision theory is wrong and dumb so we can ignore it. In particular, it never produces factuals, only counterfactuals.
You don’t need decision theories for that. You can get that far with physics and undirected imagination.
How about this:
Physics tells you pairs of actions and results.
Ethical theories tell you what results to aim for.
Decision theories combine the two.
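A minimal sketch of that division of labor in Python (all names and numbers are my own placeholders): “physics” maps actions to results, an “ethical theory” scores results, and the “decision theory” here is just the simplest possible combination of the two.

    def physics(action):
        # Stand-in world model: action -> result.
        return {"donate": "one life saved", "keep": "nothing changes"}[action]

    def ethics(result):
        # Stand-in ethical theory: result -> value.
        return {"one life saved": 1.0, "nothing changes": 0.0}[result]

    def decide(actions):
        # The simplest possible decision theory: pick the action whose result scores highest.
        return max(actions, key=lambda a: ethics(physics(a)))

    print(decide(["donate", "keep"]))  # donate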
That’s only true if you’re a human being.
That is not my understanding. The only necessary addition to physics is “any possible mechanism of varying any element in your model of the universe”. I.e., you need physics and a tiny amount of closely related mathematics. That will give you a function that gives you every possible action → result pair.
I believe this only serves to strengthen your main point about the possibility of separating epistemic investigation from ethics entirely.
That’s a decision theory. For instance, if you perform causal surgery, that’s CDT. If you change all computationally identical elements, that’s TDT. And so on.
I don’t agree. A decision theory will sometimes require the production of action–result pairs, as is the case with CDT, TDT and any other decision algorithm with a consequentialist component. Yet not all production of such pairs is a ‘decision theory’. A full mathematical model mapping every possible state to the outcomes produced is not a decision theory in any meaningful sense. It is just a solid understanding of all of physics.
On one hand we have (physics + the ability to consider counterfactuals) and on the other we have systems for choosing specific counterfactuals to consider and compare.
If you don’t have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?
That is my point. That is what the decision theory is for!
I reassert my claim that:
Your null-decision theory doesn’t tell you what options you have. It tells you what options you would have, were you God.
This is a claim about definitions. You don’t seem to disagree with wedrifid on any question of substance in this thread.
Ok, and it is still a claim that doesn’t refute anything I have previously said. This conversation is going nowhere. exit(5)
Exit totally reasonable. I just need to point out one thing:
It wasn’t a claim in response to anything you said. It was a response to Eugene Nier.
It would have made more sense to me if it was made in reply to the relevant comment by Eugene.
This conversation is kinda pointless. Therefore, my response comes in a short version and a long version.
Short:
Sorry, that was unclear. I did not make the mistake your last post implies I made. I’m pretty sure you’ve made some mistakes, but they’re really minor. We have nothing left to discuss.
Long:
Sorry, that was unclear.
The first time I posted it, it was a response to Eugene. Then you responded, criticizing it. Then, finally, it appears like we agree, so I reassert my original claim to make sure. In that context, this response is strange:
I wasn’t trying to refute you with this claim, I was trying to refute Eugene, then you tried to refute the claim.
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a dutch-book-like argument.
Voted down vigorously. If you can’t make the effort to make yourself understood, STFU.
According to the search tool, this was Less Wrong’s first use of “STFU” directed at another contributor. I’m pretty proud of the site for having avoided this term, and I’m pretty chagrined at you for having broken the streak.
It should be no surprise that this outburst made me far more inclined to view the grandparent in a positive light. In this case the actual content of Will’s comment seems easy to understand. Given Peterdjones’s aggressive use of his own incomprehension, Will was rather more patient than he needed to be. He could have linked to a Wikipedia article on the subject so that Peterdjones could get a grasp of the basics.
Rather less careful, I would say. He failed to notice the typo above until nsheperd pointed it out—the original source of the confusion. And then later he began a comment with:
I have no idea at all what “is not the case”. And I also don’t know when anyone was offered the opportunity to cleverly choose B.
Will’s description of his own limited motivation to communicate is the only portion of this thread which is crystal clear.
Yes, by working pretty hard, I was able to ignore the initial typo and to anticipate the explanation of A, B, and C. As I point out elsewhere on this thread, I have some objections to the scenario (as leaving out some details important to deontologists). Perhaps PeterDJones had similar objections. Please notice that neither of us could object to Will’s A-B-C story until it was actually spelled out. And Will resisted making the effort of spelling it out far too long.
My “STFU” was rude. But sometimes rudeness is appropriate.
It seems to me the substance of Mr Savin’s objection could have been expressed more briefly and clearly as “Deontologists would not steal under any circumstances”. (Or even the familiar “Deontologists would not lie under any circumstances, even to save a life”.)
That does not appear to be the case. Those are examples of other things that he could have said which would provide a more convenient target for your reply. Assuming you refer to Will_Sawin, that is.