Where do you stand (if you can explain without a full-page essay)? I’m something of a utilitarian skeptic as well… I’d like to see whether the rest of us have overlapping views.
My own ethical position is easy to state: confused. ;)
There should be a post, intended for people about to embark on the topic of ethics and metaethics, providing guidance on even figuring out what your current intuitions are, and where they position you on the map of the standard debates.
My (post-school) readings on the topic have included Singer’s Practical Ethics and Rawls’ A Theory of Justice. I was definitely more impressed and influenced by the latter. If pressed, I would call myself a contractarian. (Being French, I had early encounters with Rousseau, but I don’t remember those with any precision.)
I’m skeptical of the way “utility function” is often used, as a lily-gilding equivalent of “what I want”. I’m skeptical that interpersonal comparisons of utility have any validity, and hence that “my utility function”, assuming there is such a thing, can be meaningfully aggregated with “your utility function”. Thus I’m skeptical that utility provides a useful guide to moral decisions.
I’ll try to summarize, but my position isn’t fully worked out, so this is just a rough outline.
I think it’s important to distinguish the descriptive and prescriptive/normative elements of any moral or ethical theory. That distinction sometimes seems to get lost in discussions here.
Descriptively, I think human morality is actually a system of biologically and culturally evolved rules, principles and dispositions that have tended to lead to reproductive success. The details of those rules are largely an empirical question, but most people have some reasonable insight into them from being human and living in human society.
There is a naive view of evolution that fails to explain how behaviour we would generally call altruistic, or that is not obviously self-interested, could arise in such a framework. Hopefully most people here don’t explicitly hold that view, but remnants of it seem to infect the moral thinking even of people who ‘know better’. I think that if you look at human society from a game-theoretic / evolutionarily stable strategy perspective, it becomes fairly clear that most of what we call altruism or non-self-interested behaviour makes good sense for self-interested agents. I don’t believe there is any mystery about such behaviour that needs to be explained by invoking a view of morality or ethics that is not ultimately rooted in evolutionary success.
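To make that game-theoretic point concrete, here is a toy sketch of my own (not anything from the comment above), using the standard textbook prisoner’s dilemma payoffs: in a round-robin iterated game, a simple reciprocating strategy outscores unconditional defection even though every agent is maximizing only its own payoff.

```python
# Toy round-robin iterated prisoner's dilemma (illustrative only).
# Standard textbook payoffs: T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Return the total payoff each strategy earns against the other."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {'tit_for_tat': tit_for_tat, 'always_defect': always_defect}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play(strat_a, strat_b)
        totals[name_a] += score_a

print(totals)  # the reciprocator comes out ahead overall (799 vs 404 with these settings)
```

This is a cartoon of Axelrod-style tournaments rather than a model of human society; the only point is that reciprocity can be individually rational.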
Prescriptively, I think that people should behave so as to maximize their own self-interest, where self-interest is understood in the broad sense that can account for altruism, self-sacrifice and other elements that a naive interpretation of self-interest would miss. In this sense I am something of an ethical egoist. That’s a simple basis for morality, but it does not necessarily produce a simple answer to any particular moral question.
On top of this basic principle of morality, I have an ethical framework that I believe would tend to produce good results for everyone if we could all agree to adopt it. This is partly an empirical question, and I am open to revising it in light of new evidence. I’m basically a libertarian and agree with most of the reasoning of natural-rights libertarians, but largely because I think the non-aggression principle is the simplest self-consistent basis for a workable Nash equilibrium {1} for organizing human society, not because I think natural rights are some kind of inviolable moral fact. I do have a personal, somewhat deontological moral preference for freedom that arguably goes beyond what I can justify by an argument from game theory. I have seen some research indicating that people’s political and ethical beliefs correlate with personality type, and that placing a high value on personal freedom is connected with ‘openness to experience’, which may help explain my own ethical leanings.
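As a small, hedged illustration of the Nash-equilibrium framing (see footnote {1}): the sketch below uses made-up payoffs for a two-player “aggress or refrain” game in which aggression provokes retaliation costlier than any gain, and checks that mutual non-aggression is a profile from which neither player benefits by unilaterally deviating. The numbers are purely illustrative assumptions, not anything argued for above.

```python
# Two-player "aggress or refrain" game with made-up payoffs in which
# aggression triggers retaliation costlier than any loot. Under these
# assumed numbers, mutual non-aggression is a Nash equilibrium: neither
# player gains by unilaterally switching to aggression.

from itertools import product

ACTIONS = ('refrain', 'aggress')

# PAYOFF[(row action, column action)] = (row payoff, column payoff)
PAYOFF = {
    ('refrain', 'refrain'): (3, 3),
    ('refrain', 'aggress'): (1, 2),   # aggressor's gain is outweighed by retaliation costs
    ('aggress', 'refrain'): (2, 1),
    ('aggress', 'aggress'): (0, 0),
}

def is_nash(row_action, col_action):
    """True if neither player can do strictly better by deviating alone."""
    row_pay, col_pay = PAYOFF[(row_action, col_action)]
    row_ok = all(PAYOFF[(a, col_action)][0] <= row_pay for a in ACTIONS)
    col_ok = all(PAYOFF[(row_action, a)][1] <= col_pay for a in ACTIONS)
    return row_ok and col_ok

for profile in product(ACTIONS, repeat=2):
    print(profile, is_nash(*profile))
# Only ('refrain', 'refrain') prints True with these payoffs.
```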
To the extent that I view ethics as an empirical issue, I could also be called a consequentialist libertarian. I think the evidence shows that societies that follow more libertarian principles tend to be more prosperous, and I think prosperity is a very good thing (it seems odd to have to state that, but some people appear not to agree).
My biggest issues with utilitarianism, as I generally understand it, are its emphasis on some kind of global utility function that defines what is moral (incompatible with my belief that people should act in their own best interests, and, I believe, unrealistic and undesirable in practice) and its general failure to recognize the computational intractability of the problem. Together these objections mean I think utilitarianism is neither desirable nor possible, which is a somewhat damning indictment, I suppose...
{1} I use Nash equilibrium in a non-technical, allegorical sense; actually computing and verifying a true Nash equilibrium for human society is almost certainly a computationally intractable problem.
We’re not all utilitarians. It does seem to be a bafflingly popular view here but there are dissenting voices.