Within moral philosophy, at least, there are two related senses in which philosophers’ typical practice of thought-experiments can seem ill-advised:
(1) They may deal with situations that are strongly unlike the situations in which we actually need to make decisions. Perhaps you’ll never be faced with a runaway trolley, with decisions concerning 3^^^3 dust specks, or with any decision simple enough that you can easily apply your thinking about trolley problems or dust specks.
(2) They may highlight situations that disorient or break our moral intuitions or our notions of value.
To elaborate a plausible mechanism: The human categories “birds”, “vegetables” (vs. “fruits”, or “herbs”), and “morally right” are all better understood as family resemblance terms (capturing “clusters in thingspace”) than as crisp, explicitly definable, schematic categories that entities do or don’t fall into. Such family resemblance terms arguably gain their meaning, in our heads, from our exposure to many different central examples. Show a person carrots, mushrooms, spinach, and broccoli, with a “yes, these are Xes”, and strawberries, cayenne, and rice with a “these aren’t Xes”, and the person will construct the concept “vegetables”. Add in a bunch of borderline cases (“are mustard greens a vegetable or an herb? what exact features point toward and against?”) and the person’s notion of “vegetable” will lose some of its intuitive “is a category”-ness. If there are enough borderline examples in their example-space, “vegetable” won’t be a cluster for them anymore.
“Is morally right” may similarly be a cluster formed by seeing what kinds of intra- and inter-personal situations work well, or can be expected to be judged well, and may break or weaken when faced with non-“ecologically valid” thought-experiments.
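To make the cluster picture concrete, here is a small toy sketch (my own illustration, not from the original comment) that treats “thingspace” as a two-dimensional feature space with made-up numbers: two well-separated groups of central examples score as clearly distinct clusters, and piling borderline cases between them drags the separation score down, which is roughly what losing the “is a category”-ness means here.

```python
# Toy sketch only: "thingspace" as a 2-D feature space, concepts as clusters
# of examples, and a crude separation score (distance between group centroids
# divided by the average within-group spread) standing in for how "cluster-like"
# the category feels. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def separation(xs, ys):
    """Centroid distance relative to average within-group spread."""
    spread = (xs.std(axis=0).mean() + ys.std(axis=0).mean()) / 2
    return np.linalg.norm(xs.mean(axis=0) - ys.mean(axis=0)) / spread

# Central examples: clearly-vegetables vs. clearly-not-vegetables.
veg = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(30, 2))
not_veg = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(30, 2))
print(separation(veg, not_veg))  # large ratio: a crisp cluster boundary

# Pile on borderline cases (mustard greens, etc.) halfway between the groups,
# assigning half of them to each side of the boundary.
borderline = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(60, 2))
veg_blurred = np.vstack([veg, borderline[:30]])
not_veg_blurred = np.vstack([not_veg, borderline[30:]])
print(separation(veg_blurred, not_veg_blurred))  # much smaller: barely a cluster
```

The exact score doesn’t matter; the point is just that the same boundary can look clean or muddled depending on how many near-boundary examples the concept-learner has been fed.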
I spent two years in a graduate philosophy department before leaving academic philosophy to try to reduce existential risks. In my grad philosophy courses, I used to express disdain for “dust specks vs. torture”-type problems, and to offer arguments along the lines of both (1) and (2) for why I should decline to engage with such questions. My guess is that (2) was my actual motivation—I could feel aspects of my moral concern breaking when I considered trolley problems and the like—and, having not read OB, and tending to believe that arguments were like soldiers, I then argued for (1) as well.
When I left philosophy, though, and started actually thinking about what kind of a large-scale world we want, I was surprised to find that the discussions I’d claimed were inapplicable (with argument (1)) were glaringly applicable. If you’re considering what people shouldn’t tile the light-cone with, or even if you’re just considering aid to Africa, large-scale schematic beliefs about how to navigate tradeoffs are, in fact, a better guide than are folk moral intuitions about what a good friendship looks like. The central examples around which human moral intuitions are built just don’t work well for some of the most important decisions we do in fact need to make.
But despite its inconvenience, (2) may in fact pose a problem, AFAICT.
I’d support idealized thought experiments even if the world were boring. The answers to boring moral problems come, or should come, from some process you can decompose into several simple modular parts, and these parts can be individually refined on idealized examples in a way that’s cleaner and safer than refining the whole together on realistic examples. Not letting answers to thought experiments leak into superficially similar real situations takes a kind of discipline, but it’s worth it for people to build this discipline.
> large-scale schematic beliefs about how to navigate tradeoffs are, in fact, a better guide than are folk moral intuitions about what a good friendship looks like
Not at all. Our ‘folk moral intuitions’ tell us right quick that we shouldn’t tile the light-cone with anything, and I’d need quite a bit of convincing to think otherwise. Similarly, considering aid to Africa can be dealt with entirely within our ‘folk moral intuitions’, and to think otherwise I’m pretty sure you’d have to beg the question in favor of ‘large-scale schematic beliefs about how to navigate tradeoffs’.
That said, I agree wholeheartedly with (1) and (2). Part of the analysis of (1) involves the nature of observation. Intuitions are a sort of observation, and in really strange situations our observations can be confused and fail to match up with reality. While we can rely on our moral intuitions in situations we actually find ourselves facing every day, ‘desert island cases’ confuse our moral faculties, so we shouldn’t necessarily trust our intuitions in them. Of course, this starts bleeding into (2).
> considering aid to Africa can be dealt with entirely within our ‘folk moral intuitions’
This is an issue that our folk moral intuitions can get horribly wrong. It’s a lot easier to think “people in Africa are suffering, so it’s morally right to help them” than to ask “is X actually going to help them?” and harder still to figure out which intervention will help the most. The difference (from a consequentialist perspective) between efficient charity and average charity is probably much larger than the difference between average charity and no charity.
> This is an issue that our folk moral intuitions can get horribly wrong. It’s a lot easier to think “people in Africa are suffering, so it’s morally right to help them” than to ask “is X actually going to help them?”
This is true, but in this case what is going wrong is our intuitions about instrumental values, not moral ones. I think thomblake was talking about whether our folk moral intuitions could determine whether it was a good or bad thing if we did something that resulted in less suffering in Africa. Our intuitions about how to effectively accomplish that goal are a whole different beast.
Yes, exactly.