Fair enough. It was mainly the appearance of motivated stopping that I was concerned with.
While I share some general concerns about the reliability of thought experiments, in the absence of a better alternative, the question doesn’t seem to be whether we use them or not, but how we can make best use of them despite their potential flaws.
In order to answer that question, it seems like we might need a better theory than we currently have of when they're especially likely to be poor guides. It's not obvious, for example, that their information content increases monotonically with realism. Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.*
As well as trying to frame scenarios in ways that reduce noise/bias in our intuitions, we can also try to correct for the effect of known biases. A good example would be adjusting for scope insensitivity. But we need to be careful about coming up with just-so stories to explain away intuitions we disagree with. E.g. you claim that the altruist intuition is merely a low-cost signal; I claim that the converse is merely self-serving rationalization. Both of these seem like potentially good examples of confirmation bias at work.
Finally, it’s worth bearing in mind that, to the extent that our main concern is that thought experiments provide noisy (rather than biased) data, this could suggest that the solution is more thought experiments rather than fewer (for standard statistical reasons).
* And even if information content did increase with realism, realism doesn’t seem to correspond in any simple way to convenience (as your comments seem to imply). Not least because convenience is a function of one’s favourite theory as much as it is a function of the postulated scenario.
I would be interested in hearing more on this subject. It sounds similar to Hardened Problems Make Brittle Models. Do you have any good jumping-off points for further reading?
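I don't, but I'd second the call for any good suggestions.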
Many real-world issues seem too complicated for simple intuitions to be much of a guide to anything.
I don’t consider moral intuitions simple at all, though. In fact, in the case of morality I have a suspicion that trying to apply principles derived from simple thought experiments to making moral decisions is likely to produce results roughly as good as trying to catch a baseball by doing differential equations with a pencil. It seems fairly clear to me that our moral intuitions have been carefully honed by evolution to be effective at achieving a purpose (which has nothing much to do with an abstract concept of ‘good’), and when a simplified line of reasoning leads to a conflict with moral intuitions I tend to trust the intuitions more than the reasoning.
There seem to be cases where moral intuitions are maladapted to the modern world and result in decisions that appear sub-optimal, either because they directly conflict with other moral intuitions or because they tend to lead to outcomes that are worse for all parties. I place the evidentiary bar quite high in these cases though—there needs to be a compelling case made for why the moral intuition is to be considered suspect. A thought experiment is unlikely to reach that bar. Carefully collected data and a supporting theory are in with a chance.
I am also wary of bias in what people suggest should be thrown out when such conflicts arise. If our intuitions seem to conflict with a simple conception of altruism, maybe what we need to throw out is the simple conception of altruism as a foundational ‘good’, rather than the intuitions that produce the conflict.
I confess to being somewhat confused now. Your previous comment questioned the relevance of moral intuitions generated by particular types of thought experiments, and argued (on what seem to me pretty thin grounds) against accepting what seemed to be the standard intuition that the 11th man’s not-sacrificing is morally questionable.
In contrast, this comment extols the virtues of moral intuitions, and argues that we need a compelling case to abandon them. I’m sure you have a good explanation for the different standards you seem to be applying to intuitive judgments in each case, but I hope you’ll understand if I say this appears a little contradictory at the moment.
P.S. Is anyone else sick to death of the baseball/differential equations example? I doubt I’ll actually follow through on this, but I’m seriously tempted to automatically vote down anyone who uses it from now on, just because it’s becoming so overused around here.
P.P.S. On re-reading, the word “simple” in the sentence you quoted was utterly redundant. It shouldn’t have been there. Apologies for any confusion that may have caused.
I made a few claims in my original post: i) I don’t think the 11th man is acting immorally by saving himself over the 10; ii) most people would think he is acting immorally; iii) most people would choose to save themselves if actually confronted with this situation; iv) most people would consider the 11th man’s moral failing to be forgivable. I don’t have hard evidence for any claim except i); the rest are just my impressions.
The contradiction I see here is mostly in the conflict between what most people say they would do and what they would actually do. One possible resolution is to say that self-sacrifice is the morally right thing to do but that most people are morally weak. Another is to say that self-sacrifice is not a morally superior choice, and therefore most people would not actually be acting immorally in this situation by not self-sacrificing. I lean towards the latter, and would explain the conflict by saying that people see more value in signaling altruism cheaply (by saying they would self-sacrifice in an imaginary scenario) than in actually being altruistic in a real scenario. There is a genuine conflict here, but I would resolve it by saying that people tend to over-value altruism in hypothetical moral scenarios relative to actual moral decisions. I actually believe that this tendency is harmful and leads to worse outcomes, but a full explanation of my thinking there would be a much longer post than I have time for right now.
Conflicts can exist between different moral intuitions when one faces an actual moral decision, and resolving them is not simple, but that’s a different case from conflicts between intuitions about what imaginary others should do in imagined scenarios and intuitions about what one should do oneself in a real scenario.
If you have a better alternative to the baseball/differential equations example I’d happily use it. It’s the first example that sprang to mind, probably due to its being commonly used here.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
If I say “X-ing is wrong, but actually, if ever faced with this situation I would quite possibly end up X-ing because I’m selfish/weak” (which is what I and others have said elsewhere) then (a) there’s no conflict to resolve; and (b) it doesn’t make much sense to claim that my judgment that “X is wrong” is a cheap signal of altruism. In fact I’ve just signaled the opposite.
Now, if people change their moral judgments from “X-ing is wrong” to “X-ing is permissible” when actually faced with the choice, then I agree that there’s a conflict to resolve. But it seems that cognitive dissonance provides an explanation of this behavior at least as good as cheap talk.
FWIW, if you want a self-interested explanation of the stated judgment that “X-ing is wrong”, I wonder whether moral censure (i.e. trying to convince others that they shouldn’t X, even though you will ultimately X yourself) would be a better one than signaling. Not necessarily mutually exclusive, I guess.
Your argument seems to me to conflate judgments that “X-ing is wrong” with predictions that one would not X if faced with a particular choice in real life.
Judgements that a choice is morally wrong are clearly not the same thing as predictions about whether people would make that choice. The way I view morality, though, a wide gulf between the two is indicative of a problem to be resolved. I see the purpose of morality as providing a framework for solving something analogous to an iterated prisoner’s dilemma: if we can all agree to impose certain restrictions on our own actions because we all expect to do better if everyone sticks to the rules, then we have a system of morality.
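To make the analogy concrete, here’s a toy sketch (my own illustrative payoff numbers, nothing canonical) of the point that everyone constraining their immediate self-interest leaves everyone better off than universal defection:

```python
# Toy iterated prisoner's dilemma: compare outcomes when everyone "sticks
# to the rules" (cooperates) versus when everyone defects. Payoff numbers
# are illustrative only, chosen to satisfy the usual T > R > P > S ordering.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs for two strategies over repeated rounds.

    A strategy is a function taking the opponent's previous move
    (None on the first round) and returning "C" or "D".
    """
    total_a = total_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        total_a += PAYOFFS[(move_a, move_b)]
        total_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return total_a, total_b

def always_cooperate(prev):
    return "C"

def always_defect(prev):
    return "D"

print(play(always_cooperate, always_cooperate))  # (300, 300): everyone better off
print(play(always_defect, always_defect))        # (100, 100): everyone worse off
```

The hard part, of course, is making cooperation stable against defectors, which is the implementability concern I come back to below.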
Humans have a complex interplay of instinctive moral intuitions and cultural norms that together form a moral framework that exists because it provides a reasonably stable solution to living in mutually beneficial societies. That doesn’t mean it can’t be improved, just that its very existence implies that it works reasonably well.
The problem, then, with a moral dilemma that appears to present a wide gap between what people say should be done and what people would actually do is that it suggests a flaw in the moral framework. A stable framework will generally require that decisions people can agree are right (in that we’d expect on average to be better off if we all followed them) are also decisions people can plausibly commit to taking if faced with the problem. It’s like the pre-commitment problem discussed before on Less Wrong. You might wish to argue for an idealized morality that sets standards for what people should do that are not what most people would do, but then you have to make a plausible case for why what people actually do is wrong. Further, I’d argue you have to make a case for how your system could actually be implemented with actual people in a stable fashion—an idealized morality that is not achievable with actual people is not very interesting to me.
Ultimately I don’t take a utilitarian view of morality—that what is ‘good’ is what maximizes utility across all agents. I take an ‘enlightened self-interest’ view—that what is ‘good’ is what all agents can agree is a framework that will tend to lead to better expected outcomes for each individual if each individual constrains his own immediate self-interest in certain ways.
There are heaps and heaps of consequentialist/utilitarian views that don’t maximize utility uncritically across everybody. It sounds like you prefer something in the neighborhood of agent-favoring morality, but ethical egoism is a consequentialist view too.
Based on discussions I’ve had here I get the impression that most people consider ‘utilitarianism’, unqualified, to imply equal weighting for all people in the utility function to be maximized. Even where equal weighting is not implied (the existence of the ‘utility monster’ as a problem for some variants acknowledges that weights are not necessarily equal), it seems that utilitarianism assumes a unique weighting over all agents, and that what is ‘right’ is what maximizes some globally agreed-upon utility function. I don’t accept either premise so I’m fairly sure I’m not a utilitarian.
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies. I haven’t found a description of an ethical theory that I feel comfortable identifying my views with so far, though ethical egoism seems somewhat close from the little I’ve read on Wikipedia (it’s what I ended up putting down on Yvain’s survey).
It seems to me that most consequentialist views fail to take into account sufficiently the problem of the implementability and stability of their moral schemes in actual human (or other) societies.
If a scheme isn’t implementable or stable, then it doesn’t maximize welfare, so utilitarianism does not recommend it. Utilitarianism describes a goal, not a method.
I don’t consider myself a utilitarian because I don’t agree with the goals of any of the variants I’ve seen described.
I’m not sure whether I consider myself a consequentialist because while I think that ultimately outcomes are important, I don’t see enough attention paid to issues of implementability and stability in many descriptions of consequentialist views I’ve read.
For example, it seems that some (not all) consequentialist ethics consider the ‘rightness’ of an action to be purely a function of its actual consequences, thus making it possible for an attempted murder to be a morally good act because it has an unintended good consequence and an attempt at assistance to be a morally bad act because it has an unintended bad consequence. Other variants of consequentialist ethics (rule consequentialism, which seems closer to something I would feel comfortable identifying with) recognize the impossibility of perfect prediction of outcomes and so associate the ‘good’ with rules that tend to produce good outcomes if followed. Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
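You may find this paper on consequentialism and decision procedures interesting.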
Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
That’s okay, nobody else knows either. (People have guesses, but most of them exclude things that seem like they should be included or vice-versa.) The only way to get a handle on the word seems to be to listen to people use it a lot and sort of triangulate.