It’s hard to reconcile any western lifestyle with traditional utilitarianism, though, so if that’s your main concern with cryonics, perhaps you need to reconsider your ethics rather than worry about cryonics.
One of the beauties of utilitarianism is that its ethics can adapt to different circumstances without losing objectivity. I don’t think every “western lifestyle” is necessarily reprobate under utilitarianism. First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild. We can’t all afford to be Gandhi. The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars without causing an ethically greater amount of inconvenience or economic harm.
All that said, I’d be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.
First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild.
Universalizability arguments like this are non-utilitarian; it’s the marginal utility of your decision (modulo Newcomblike situations) that matters.
The rub is trying to avoid being a part of really harmful, unsustainable things like commercial ocean fishing or low fuel-efficiency cars
It definitely seems to me that refraining from these things is so much less valuable than making substantial effective charitable contributions (preferably to existential risk reduction, of course, but still true of e.g. the best aid organizations), probably avoiding factory-farmed meat, and probably other things as well.
First off, if westerners abandoned their western lifestyles, humanity would be sunk: next to the collapse of aggregate demand that would ensue, our present economic problems would look very mild.
Interesting. I’m not certain, but I think this isn’t quite right. In theory, the westerners would just be sending their money to desperately poor people, so aggregate demand wouldn’t necessarily decline, it would move around. Consumption really doesn’t create wealth. Of course rational utilitarian westerners would recognize the transfer costs and also wouldn’t completely neglect their own happiness.
All that said, I’d be really interested in reading a post by you on rationalist but non-utilitarian ethics. It seems to me that support for utilitarianism on this site is almost as strong as support for cryonics.
Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.
If you have no regard for yourself then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa.
If you have no regard for others, then a policy of selfishness is for you. Carefully plan to maximize your total future well-being. Leave just enough for others that you aren’t outed as a sociopath. Study Anton LaVey.
If you have equal regard for the happiness of yourself and others, pursue utilitarianism. Study Rawls or John Stuart Mill.
Most people aren’t really any of the above. I, like most people, am somewhere between LaVey and Mill. Of course defending utilitarianism sounds better than justifying egoism, so we get more of that.
Yeah, I heard about this on Bullshit with Penn & Teller. I considered choosing someone else, but Mother Teresa is still the easiest symbol of pure altruism. (That same episode included a smackdown on the Dalai Lama and Gandhi, so my options look pretty weak.)
Yes, ‘pure altruism’ is a pretty weak position, and you won’t find many proponents of it. Altruism as an ethical position doesn’t make any sense; you keep pushing all of your utils on other people, but if you consider a 2-person system doing this, nobody actually gets to keep any of the utils.
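The 2-person point can be made concrete with a toy simulation (all numbers and the function name are hypothetical, purely for illustration): if each pure altruist immediately hands over whatever utils they hold, the utils circulate forever and nobody ever consumes any.

```python
# Two pure altruists: whoever holds the utils passes them all to the
# other rather than consuming any. Numbers are illustrative only.
def pure_altruist_exchange(initial_utils, rounds):
    holdings = [initial_utils, 0]  # agent 0 starts with all the utils
    consumed = [0, 0]              # utils anyone actually gets to enjoy
    for _ in range(rounds):
        giver = holdings.index(max(holdings))
        receiver = 1 - giver
        holdings[receiver] += holdings[giver]  # hand everything over
        holdings[giver] = 0
    return consumed

print(pure_altruist_exchange(10, 100))  # [0, 0]: nobody keeps any utils
```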
Agreed, but under certain conditions relating to how much causal influence one has on others vs. oneself, utilitarianism and pure altruism lead to the same prescriptions. (I would argue these conditions are usually satisfied in practice.)
Gandhi? Really? My impression is that the “smackdown” on Gandhi is vastly, vastly less forceful than the smackdown on Teresa. Though I haven’t watched that particular episode, I’ve read other critiques that seemed to be reaching as far as possible, and they didn’t reach very far.
In theory, the westerners would just be sending their money to desperately poor people.
I’m not an economist, but I think you could model that as a kind of demand. And I don’t think I stipulated to there being a transfer of wealth.
Unless you believe in objective morality, then a policy of utilitarianism, pure selfishness, or pure altruism all may be instrumentally rational, depending on your terminal values.
For me, the interesting question is how one goes about choosing “terminal values.” I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an “objective” morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?
Short answer? We don’t. Not really. Human beings have an evolved moral instinct. These evolutionary moral inclinations lead us to assign a high value to human life and well-being. The closest thing to an internally coherent ethical structure seems to be utilitarianism. (It sounds bad for a rationalist to admit “I value all human life equally, except I value myself and my children somewhat more.”)
But we are not really utilitarians. Our mental architecture doesn’t allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
It sounds bad for a rationalist to admit “I value all human life equally, except I value myself and my children somewhat more.”
Only because that’s logically contradictory. If you drop the ‘equally’ part it sounds fine to me: “I value all human life, but I value some human lives more than others.”
Utilitarianism is clearly not a good descriptive ethical theory (it does a poor job of describing or predicting how people actually behave) and I see no good reason to believe it is a good normative theory (a prescription for how people should behave).
‘Gut feeling’ is pretty much how I am evaluating it (and gut feeling is a normative theory in a sense—what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict, and my understanding of where my values come from, I don’t see why I should accept what utilitarianism says is good over what I believe is good.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure.
What do you mean by an “ethical theory” here? Do you mean something purely descriptive, that tries to account for that side of human behaviour that is to do with ethics? Or something normative, that sets out what a person should do?
Since it’s clear that people express different ideas about ethics from each other, a descriptive theory that said otherwise would be false as a matter of fact. Normative theories, however, are generally applicable to everyone for no other reason than that they don’t name the specific individuals they are about.
Utilitarianism is a normative proposal, not a descriptive theory.
I mean a normative theory (or proposal if you prefer). Utilitarianism clearly fails as a descriptive theory (and I don’t think its proponents would generally disagree on that).
A normative theory that proposes everything would be fine if we could all just agree on the optimal outcome isn’t going to be much help in resolving the actual ethical problems facing humanity. It may be true that if we were all perfect altruists the system would be self-consistent, but we aren’t, I don’t see any realistic way of getting there from here, and I wouldn’t want to anyway (since it would conflict with my actual values).
A useful normative ethics has to work in a world where agents have differing (and sometimes conflicting) ideas of what an optimal outcome is. It has to help us cooperate to our mutual advantage despite imperfectly aligned goals rather than try to fix the problem by forcing the goals into alignment.
Utilitarianism is a theory for what you should do. It presupposes nothing about what anyone else’s ethical driver is. If cooperating with someone with different ethical goals furthers total utility from your perspective, utilitarianism commends it.
But we are not really utilitarians. Our mental architecture doesn’t allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
Shouldn’t this be evidence that utilitarianism isn’t close to the facts about ethics?
Shouldn’t this be evidence that utilitarianism isn’t close to the facts about ethics?
The rest of our brains are wired to give close-enough approximations quickly, not to reliably produce correct answers (cf. cognitive biases). It’s not a given that any coherent definition of ethics, even a correct one, should agree with our intuitive responses in all cases.
Short answer? We don’t. Not really. Human beings have an evolved moral instinct.
A longer answer looks at what ‘choice’ means a little more closely and wonders how traceable causality implies lack of choice in this instance and yet still manages to have any meaning whatsoever.
I’m interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more ‘objectively’ moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all ‘defecting’ (the prisoner’s dilemma is too simple to be a really representative model but is a useful analogy).
The extent and nature of that minimal framework is an open question and is what I’m interested in establishing.
You might be interested in the literature in normative ethics on what is called the overdemandingness problem. In particular, check out Liam Murphy on what he calls the cooperative principle. It takes utilitarianism but establishes a limit on the amount individuals are required to sacrifice… Murphy’s theory sets the limit at that which the individual would be required to sacrifice under full cooperation. So rather than sacrificing all your material well-being until giving more would reduce your well-being to beneath that of the people you’re trying to help, you instead need only sacrifice what would be required of you if the entire western world and non-western elites were doing their part as well.
I’m interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more ‘objectively’ moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all ‘defecting’ (the prisoner’s dilemma is too simple to be a really representative model but is a useful analogy).
You’re talking about ‘politics’, not ‘ethics’. Politics is about working together, ethics is about what one has most reason to do or want. What the political rules should say and what I should do are not necessarily going to give me the same answers.
I disagree with your definitions. You seem to be talking about normative ethics—what you ‘should’ do. I’m more interested in topics that might fall under meta-ethics, descriptive ethics and applied ethics. There is certainly cross-over with politics but there is a lot of other baggage that comes with the word politics that means it’s not a word I find useful to talk about the kind of questions I’m interested in here.
Think coordination. Two agents may coordinate their actions if doing so will benefit both; in this sense, it’s cooperation. It doesn’t include fighting over preferences: fighting over preferences would just consist in the agents acting on the environment without coordination. But that should never be the outcome, since the set of coordinated plans is a strict superset of the set of uncoordinated plans, and as a result it always contains a solution that is a Pareto improvement on the best uncoordinated one, that is, at least as good for both players as the best uncoordinated solution. Thus, it’s always useful to coordinate your actions with all other agents (and at that point, you also need to divide the benefit of coordination between the sides fairly; think Ultimatum game).
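The Pareto claim above can be sketched with a toy 2×2 game (the payoff numbers are hypothetical, chosen only to illustrate): uncoordinated play lands on mutual defection, but the set of jointly available plans contains an outcome at least as good for both players.

```python
# Toy 2-player game; payoffs[(move1, move2)] = (utility1, utility2).
# Payoffs are hypothetical, picked so that the uncoordinated outcome
# (both defect) is Pareto-dominated by a coordinated plan (both cooperate).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Uncoordinated play: each player best-responds assuming the other
# defects, which in this game yields the (D, D) outcome.
uncoordinated = payoffs[("D", "D")]

# Coordinated play: the players may jointly pick ANY cell, so they can
# search for an outcome at least as good for both as the uncoordinated one.
improvements = [
    outcome for outcome in payoffs.values()
    if outcome[0] >= uncoordinated[0] and outcome[1] >= uncoordinated[1]
    and outcome != uncoordinated
]
print(improvements)  # [(3, 3)] — cooperation Pareto-improves on (1, 1)
```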
Peaceful coexistence is not something I object to. Neither does anything oblige agents to perfectly align their values, each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I’m firmly in Anton LaVey’s corner on civil liberties, for instance. It should be recognized, though, that some are clearly more wrong than others because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person’s judgement: How are we supposed to choose values?
It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from differing priorities placed on conflicting values and most of all on flawed reasoning about the best way to structure a system to best deliver results that satisfy our common preferences.
This fact is often obscured by the tendency for political disputes to impute ‘bad’ values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.
On the whole, we’re agreed, but I still don’t know how I’m supposed to choose values.
This fact is often obscured by the tendency for political disputes to impute ‘bad’ values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.
I think this tactic works best when you’re dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you’re playing to your base, not trying to grab the center.
I don’t think objectivity is an important feature of ethics. I’m not sure there’s such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with—the best it can do is try to identify where goals are not internally consistent.
I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it’s incomplete. If I ever develop a better explanation I might make a top level post.
Choosing those goals is not something that rationality can help much with—the best it can do is try to identify where goals are not internally consistent.
It often turns out that generating consistent decision rules can be harder than one might expect. Hence the plethora of “impossibility theorems” in social choice theory. (Many of these, like Arrow’s, arise when people try to rule out interpersonal utility comparisons, but there are a number that bite even when such comparisons are allowed, e.g. in population ethics.)
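The inconsistency these theorems point at can be seen in miniature in the classic Condorcet cycle (the voter rankings below are hypothetical, chosen to produce the cycle): three perfectly coherent individual rankings, aggregated by pairwise majority vote, yield a cyclic and hence inconsistent social ranking.

```python
# Three voters with individually consistent rankings over options a, b, c.
rankings = [
    ["a", "b", "c"],  # voter 1: a > b > c
    ["b", "c", "a"],  # voter 2: b > c > a
    ["c", "a", "b"],  # voter 3: c > a > b
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Each pairwise verdict is decisive (2 votes to 1), yet together they
# cycle — a > b, b > c, and c > a — so "society" has no consistent ranking.
print(majority_prefers("a", "b"))  # True
print(majority_prefers("b", "c"))  # True
print(majority_prefers("c", "a"))  # True
```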
Yeah, expecting to achieve consistency is probably too much to ask, but recognizing conflicts at least allows you to make a conscious choice about priorities.
Choosing those goals is not something that rationality can help much with—the best it can do is try to identify where goals are not internally consistent.
I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn’t we try to pin down and either discard or accept some version of “purpose,” as a sort of first instrumental rationality?
I mention objectivity because I don’t think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There’s little to discuss if you don’t, because “everything is permitted.” That said, I think ethics has to understand each person’s competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more “important” thing isn’t agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I’m in substantial agreement with:
Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.
And I would enjoy thoroughly a post on this topic.
I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn’t we try to pin down and either discard or accept some version of “purpose,” as a sort of first instrumental rationality?
Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit; in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I’m primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self-consistent set of goals.
There’s little to discuss if you don’t, because “everything is permitted.”
To the extent that we share a common evolutionary history, our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit in agreeing on rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is ‘wrong’ in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the ‘freedom’ to murder at will. That equilibrium can break down, and I’m interested in ways to robustly maintain the ‘good’ equilibrium rather than the ‘bad’ equilibrium that has existed at certain times and in certain places in history. I don’t however feel the need to ‘prove’ that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle—I simply take it as a given.
Why do you think it needs to be confronted?
…
I don’t however feel the need to ‘prove’ that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle—I simply take it as a given.
I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?
Most problems in the world seem to arise from conflicting goals, either internally or between different people. I’m primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts.
Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I’m saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.
What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?
I’m very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don’t however see any reason to expect to find or to want to find a more fundamental basis for those preferences.
Our goals are what they are because they were the kind of goals that made our ancestors successful. They’re the kind of goals that lead to people like us with just those kind of goals… There doesn’t need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as to try to explain complex entities with a more fundamental complex creator of those entities.
Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want.
I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.
To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and in other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to “objective” morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?
Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences.
We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also.
I don’t really recognize a distinction here. The explanation explains why preferences are their own justification in my view.
Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.
I think I at least partially agree—sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities.
The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized.
This looks like the utilitarian position and is where I would disagree to some extent. I don’t believe it’s necessary or desirable for individuals to prefer ‘aggregated’ utility. If forced to choose I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize ‘aggregate’ utility. I believe that is perfectly moral and is a natural part of our value system. I am however happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.
If you have no regard for yourself then pursue pure altruism. Leave yourself just enough that you can keep producing more wealth for others. Study Mother Teresa.
Hitchens: The pope beatifies Mother Teresa, a fanatic, a fundamentalist, and a fraud.
Gandhi? Really? My impression is that the “smackdown” on Gandhi is vastly, vastly less forceful than the smackdown on Teresa. Though I haven’t watched that particular episode, I’ve read other critiques that seemed to be reaching as far as possible, and they didn’t reach very far.
It mostly had to do with Gandhi being racist.
Unsure if it’s worth reading, but here is a long critical article.
Perhaps you should reconsider the value of ‘pure altruism’.
I’m not an economist, but I think you could model that as a kind of demand. And I don’t think I stipulated to there being a transfer of wealth.
Yes, that was my point. I go on to say that aggregate demand would not decrease.
For me, the interesting question is how one goes about choosing “terminal values.” I refuse to believe that it is arbitrary or that all paths are of equal validity. I will contend without hesitation that John Stuart Mill was a better mind, a better rationalist, and a better man than Anton LaVey. My own thinking on these lines leads me to the conclusion of an “objective” morality, that is to say one with expressible boundaries and one that can be applied consistently to different agents. How do you choose your terminal values?
I recommend Eliezer’s essay regarding the objective morality of sorting pebbles into correct heaps.
http://www.overcomingbias.com/2008/08/pebblesorting-p.html
Utilitarianism is clearly not a good descriptive ethical theory (it does a poor job of describing or predicting how people actually behave) and I see no good reason to believe it is a good normative theory (a prescription for how people should behave).
How are you going to evaluate a normative theory, except by comparison to another normative theory, or by gut feeling?
‘Gut feeling’ is pretty much how I am evaluating it (and is a normative theory in a sense—what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict and my understanding of where my values come from I don’t see why I should accept what utilitarianism says is good over what I believe is good.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.
What do you mean by an “ethical theory” here? Do you mean something purely descriptive, that tries to account for that side of human behaviour that is to do with ethics? Or something normative, that sets out what a person should do?
Since it’s clear that people express different ideas about ethics from each other, a descriptive theory that said otherwise would be false as a matter of fact. However, normative theories are generally applicable to everyone for no other reason than that they don’t name the specific individuals they are about.
Utilitarianism is a normative proposal, not a descriptive theory.
I mean a normative theory (or proposal if you prefer). Utilitarianism clearly fails as a descriptive theory (and I don’t think its proponents would generally disagree on that).
A normative theory that proposes everything would be fine if we could all just agree on the optimal outcome isn’t going to be much help in resolving the actual ethical problems facing humanity. It may be true that the system would be self-consistent if we were all perfect altruists, but we aren’t; I don’t see any realistic way of getting there from here, and I wouldn’t want to anyway (since it would conflict with my actual values).
A useful normative ethics has to work in a world where agents have differing (and sometimes conflicting) ideas of what is an optimal outcome. It has to help us cooperate to our mutual advantage despite imperfectly aligned goals rather than try to fix the problem by forcing the goals into alignment.
Utilitarianism is a theory for what you should do. It presupposes nothing about what anyone else’s ethical driver is. If cooperating with someone with different ethical goals furthers total utility from your perspective, utilitarianism commends it.
Shouldn’t this be evidence that utilitarianism isn’t close to the facts about ethics?
Only if you think we’re wired to be ethical.
I believe that was part of what knb was saying.
The rest of our brains are wired to give close-enough approximations quickly, not to reliably produce correct answers (cf. cognitive biases). It’s not a given that any coherent definition of ethics, even a correct one, should agree with our intuitive responses in all cases.
A longer answer looks at what ‘choice’ means a little more closely and wonders how traceable causality implies lack of choice in this instance and yet still manages to have any meaning whatsoever.
I’m interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more ‘objectively’ moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without all ‘defecting’ (the prisoner’s dilemma is too simple to be a really representative model but is a useful analogy).
The extent and nature of that minimal framework is an open question and is what I’m interested in establishing.
You might be interested in the literature in normative ethics on what is called the overdemandingness problem. In particular, check out Liam Murphy on what he calls the cooperative principle. It takes utilitarianism but establishes a limit set on the amount individuals are required to sacrifice… Murphy’s theory sets the limit as that which the individual would be required to sacrifice under full cooperation. So rather than sacrificing all your material wellbeing until giving more would reduce your wellbeing to beneath that of the people you’re trying to help you instead need only sacrifice that which would be required of you if the entire western world and non-western elites were doing their part as well.
You’re talking about ‘politics’, not ‘ethics’. Politics is about working together, ethics is about what one has most reason to do or want. What the political rules should say and what I should do are not necessarily going to give me the same answers.
I disagree with your definitions. You seem to be talking about normative ethics—what you ‘should’ do. I’m more interested in topics that might fall under meta-ethics, descriptive ethics and applied ethics. There is certainly cross-over with politics but there is a lot of other baggage that comes with the word politics that means it’s not a word I find useful to talk about the kind of questions I’m interested in here.
Think coordination. Two agents may coordinate their actions if doing so will benefit both; in this sense, it’s cooperation. It doesn’t include fighting over preferences: fighting over preferences would just consist in the agents acting on the environment without coordination. But fighting should never be necessary, since the set of coordinated plans strictly contains the set of uncoordinated plans, and as a result it always contains a solution that is a Pareto improvement on the best uncoordinated one, that is, at least as good for both players as the best uncoordinated solution. Thus, it’s always useful to coordinate your actions with all other agents (and at this point, you also need to divide the benefit of coordination fairly between the sides; think Ultimatum game).
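The point can be made concrete with a toy example (my illustration, not from the comment): in a Prisoner’s Dilemma, the uncoordinated outcome is mutual defection, but the larger set of coordinated joint plans contains mutual cooperation, which is a Pareto improvement. The payoff numbers below are the conventional textbook ones, chosen for illustration.

```python
# Payoff table: (row action, col action) -> (row payoff, col payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_improves(a, b):
    """True if joint payoff a is at least as good for both players as b,
    and strictly better for at least one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

# Without coordination, D strictly dominates C for each player,
# so the uncoordinated outcome is (D, D).
uncoordinated = PAYOFFS[("D", "D")]

# With coordination, every joint plan is on the table; some of them
# Pareto-improve on the uncoordinated outcome.
improvements = [plan for plan, payoff in PAYOFFS.items()
                if pareto_improves(payoff, uncoordinated)]

print(improvements)  # [('C', 'C')]
```

As the comment notes, finding a Pareto improvement is only half the problem; the players still have to agree on how to split the gains from coordinating, which is where Ultimatum-game-style bargaining comes in.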
Peaceful coexistence is not something I object to. Neither does anything oblige agents to perfectly align their values; each is free to choose. I strongly endorse people with wildly different values cooperating in areas of common interest: I’m firmly in Anton LaVey’s corner on civil liberties, for instance. It should be recognized, though, that some values are clearly more wrong than others, because some people get poor information and others reason poorly through akrasia or inability. Anton LaVey was not trying hard enough. I think the question is worth asking, because it is the basis of building the minimal framework of rules from each person’s judgement: How are we supposed to choose values?
It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from differing priorities placed on conflicting values and, most of all, from flawed reasoning about the best way to structure a system to deliver results that satisfy our common preferences.
This fact is often obscured by the tendency for political disputes to impute ‘bad’ values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.
On the whole, we’re agreed, but I still don’t know how I’m supposed to choose values.
I think this tactic works best when you’re dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you’re playing to your base, not trying to grab the center.
I don’t think objectivity is an important feature of ethics. I’m not sure there’s such a thing as a rationalist ethics. Being rational is about optimally achieving your goals. Choosing those goals is not something that rationality can help much with—the best it can do is try to identify where goals are not internally consistent.
I gave a rough exposition of what I see as a possible rationalist ethics in this comment but it’s incomplete. If I ever develop a better explanation I might make a top level post.
It often turns out that generating consistent decision rules can be harder than one might expect. Hence the plethora of “impossibility theorems” in social choice theory. (Many of these, like Arrow’s, arise when people try to rule out interpersonal utility comparisons, but there are a number that bite even when such comparisons are allowed, e.g. in population ethics.)
Yeah, expecting to achieve consistency is probably too much to ask, but recognizing conflicts at least allows you to make a conscious choice about priorities.
Ok, here is what I don’t agree with:
I think rationality absolutely must confront the question of purpose, and head-on. How else are we to confront it? Shouldn’t we try to pin down and either discard or accept some version of “purpose,” as a sort of first instrumental rationality?
I mention objectivity because I don’t think you can have any useful ethics without some static measure of comparability, some goal, however loose, that each person can pursue. There’s little to discuss if you don’t, because “everything is permitted.” That said, I think ethics has to understand each person’s competence to self-govern. Your utility function is important to everyone, but nobody knows how to maximize your utility function better than you. Usually. Ethics also has to bend to reality, so the more “important” thing isn’t agreement on theoretical questions, but cooperation towards mutually-agreed goals. So I’m in substantial agreement with:
And I would enjoy thoroughly a post on this topic.
Why do you think it needs to be confronted? I know there are many things that I want (though some of them may be mutually exclusive when closely examined) and that there are many similarities between the things that I want and the things that other humans want. Sometimes we can cooperate and both benefit, in other cases our wants conflict. Most problems in the world seem to arise from conflicting goals, either internally or between different people. I’m primarily interested in rationality as a route to better meeting my own goals and to finding better resolutions to conflicts. I have no desire to change my goals except to the extent that they are mutually exclusive and there is a clear path to a more self consistent set of goals.
To the extent that we share a common evolutionary history our goals as humans overlap to a sufficient extent that cooperation is beneficial more often than not. Even where goals conflict, there is mutual benefit to agreeing rules for conflict resolution such that not everything is permitted. It is in our collective interest not to permit murder, not because murder is ‘wrong’ in some abstract sense but simply because most of us can usually agree that we prefer to live in a society where murder is forbidden, even at the cost of giving up the ‘freedom’ to murder at will. That equilibrium can break down and I’m interested in ways to robustly maintain the ‘good’ equilibrium rather than the ‘bad’ equilibrium that has existed at certain times and in certain places in history. I don’t however feel the need to ‘prove’ that my underlying preference for preserving the lives of myself and my family and friends (and to a lesser extent humans in general) is a fundamental principle—I simply take it as a given.
I think it needs to be confronted because simply taking things as given leads to sloppy moral reasoning. Your preference for self-preservation seems to be an impulse like any other, no more profound than a preference for chocolate over vanilla. What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?
Again, this is the ultimately important part. Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want. Further, we discipline ourselves so that our goals are clear and consistent. All I’m saying is that you may want to look into the basis of your own goals and systematize them to enhance clarity.
I’m very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don’t however see any reason to expect to find or to want to find a more fundamental basis for those preferences.
Our goals are what they are because they were the kind of goals that made our ancestors successful. They’re the kind of goals that lead to people like us with just those kinds of goals… There doesn’t need to be anything more fundamental to morality. To try to explain our moral principles by appealing to more fundamental moral principles is to make the same kind of mistake as to try to explain complex entities with a more fundamental complex creator of those entities.
Hopefully we can all agree on that.
I think we are close. Do you think enjoyment and pain can be reduced to or defined in terms of preference? We have an explanation of preference in evolutionary psychology, but to my mind, a justification of its significance is necessary also. Clearly, we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning to accept realities beyond what our evolutionary sense of purpose is equipped for.
To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to “objective” morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is it too much?
Certainly close enough to hope to agree on a set of rules, if not completely on personal values/preferences.
I don’t really recognize a distinction here. The explanation explains why preferences are their own justification in my view.
I think I at least partially agree—sometimes we should override our immediate moral intuitions in light of a deeper understanding of how following them would lead to worse long term consequences. This is what I mean when I talk about recognizing contradictions within our value system and consciously choosing priorities.
This looks like the utilitarian position and is where I would disagree to some extent. I don’t believe it’s necessary or desirable for individuals to prefer ‘aggregated’ utility. If forced to choose I will prefer outcomes that maximize utility for myself and my family and friends over those that maximize ‘aggregate’ utility. I believe that is perfectly moral and is a natural part of our value system. I am however happy to accept constraints that allow me to coexist peacefully with others who prefer different outcomes. Morality should be about how to set up a system that allows us to cooperate when we have an incentive to defect.