My shot at an answer: the goal is to derive a general principle or principles underlying a wide range of (potentially inconsistent) intuitions about what is ethical in a variety of situations.
We could ask people directly about their intuitions, or attempt to discern them from how they actually behave in practice.
Not sure whether this will be at all useful (and apologies if this is pitched either too low or too high—it’s always hard to judge these things), but to take a ridiculously pared-down example, assume that we quiz two different people on their intuitions about the “goodness”, G, of a variety of actions they could perform in a range of different circumstances. To make things really simple, assume that these actions differ only in terms of their effect on total well-being, T, and on inequality in well-being, I. (Assume also that we magically have some set of reasonable scales with which to measure G, T, and I.)
An example of a parametric approach to discerning a “general principle” underlying all these intuitions would be to plot the {G,T,I} triples on a 3-dimensional scatter plot, and to find the plane that comes closest to fitting them all (e.g. by minimizing the total squared distance between the points and the plane, as in least-squares regression). It would then be possible to articulate a general principle that says something like “goodness increases 2 units for every unit increase in total welfare, and decreases 1 unit for every unit increase in inequality.”
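To make the parametric version concrete, here’s a minimal sketch in Python. The {T, I, G} numbers are invented purely for illustration; the plane G ≈ a·T + b·I + c is fitted by least squares:

```python
import numpy as np

# Hypothetical elicited intuitions: each row is one quizzed action,
# as (T = total well-being, I = inequality, G = judged goodness).
# These numbers are made up purely for illustration.
data = np.array([
    [1.0, 0.5, 1.4],
    [2.0, 1.0, 2.9],
    [3.0, 0.2, 5.7],
    [1.5, 2.0, 1.1],
    [2.5, 1.5, 3.4],
    [0.5, 0.1, 0.9],
])
T, I, G = data[:, 0], data[:, 1], data[:, 2]

# Fit the plane G ≈ a*T + b*I + c by least squares.
X = np.column_stack([T, I, np.ones_like(T)])
(a, b, c), *_ = np.linalg.lstsq(X, G, rcond=None)

print(f"G ≈ {a:.2f}*T + {b:.2f}*I + {c:.2f}")
# Coefficients of roughly a = 2 and b = -1 would correspond to the principle
# "2 units of goodness per unit of total welfare, minus 1 per unit of inequality".
```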
An example of a non-parametric approach would be to instead determine the goodness of an action simply by taking the average of the nearest two intuitions, i.e. 2-nearest-neighbour averaging (which, assuming we have asked each individual the same questions, will just be the two individuals’ judgments about the closest action we’ve quizzed them on). It’s not clear in advance whether there will be any easy way to articulate the resulting “general principle”. It might be that goodness sometimes increases with inequality and sometimes decreases, perhaps depending on the level of total well-being; it might be that we get something that looks almost exactly like the plane we would get from the previous parametric approach; or we might get something else entirely.
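And a sketch of the non-parametric version over the same invented data, estimating the goodness of a new action as the average of the two nearest elicited judgments:

```python
import numpy as np

# Same invented {T, I, G} triples as above; each row is one elicited judgment.
data = np.array([
    [1.0, 0.5, 1.4],
    [2.0, 1.0, 2.9],
    [3.0, 0.2, 5.7],
    [1.5, 2.0, 1.1],
    [2.5, 1.5, 3.4],
    [0.5, 0.1, 0.9],
])
points, G = data[:, :2], data[:, 2]

def goodness_2nn(t, i):
    """Estimate G for a new action (t, i) as the mean of the 2 nearest intuitions."""
    dists = np.linalg.norm(points - np.array([t, i]), axis=1)
    nearest = np.argsort(dists)[:2]
    return G[nearest].mean()

print(goodness_2nn(2.2, 0.8))  # interpolates between nearby elicited judgments
```

Note that nothing forces this estimator to look like a plane: whatever shape the local judgments trace out, it follows.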
In reality, of course, we’ve got billions of people with billions of different intuitions, we can’t necessarily measure G in any sensible way, and the possible actions we might take will differ in all sorts of complicated ways that I’ve totally ignored here. Indeed, deciding what sort of contextual information to pay attention to, and what to ignore, could easily end up being more important than how we try to fit a hypersurface to the resulting data. In general, though, the second (non-parametric) sort of solution is likely to do better (a) the more data you have about people’s intuitions; (b) the less reasonable you think it is to assume a particular form of general principle; and (c) the less messed up you think case-specific intuitions are likely to be.
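To illustrate (a) and (b), here’s a toy comparison under an invented nonlinear “ground truth” (in which inequality matters more when total well-being is low). With only a handful of elicited intuitions the best-fit plane tends to hold up better, but as the data grow the 2-nearest-neighbour estimate can track curvature the plane has to ignore:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_goodness(T, I):
    # Invented nonlinear ground truth: inequality matters more when T is low.
    return 2 * T - I * (3 - T)

def knn_predict(train_x, train_g, x, k=2):
    dists = np.linalg.norm(train_x - x, axis=1)
    return train_g[np.argsort(dists)[:k]].mean()

for n in [10, 100, 1000]:
    X = rng.uniform(0, 3, size=(n, 2))                           # elicited (T, I) pairs
    G = true_goodness(X[:, 0], X[:, 1]) + rng.normal(0, 0.3, n)  # noisy intuitions

    # Parametric: best-fit plane via least squares.
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), G, rcond=None)

    # Compare mean absolute error on fresh test actions.
    test = rng.uniform(0, 3, size=(200, 2))
    truth = true_goodness(test[:, 0], test[:, 1])
    plane_err = np.abs(np.column_stack([test, np.ones(200)]) @ coef - truth).mean()
    knn_preds = np.array([knn_predict(X, G, x) for x in test])
    knn_err = np.abs(knn_preds - truth).mean()
    print(f"n={n:4d}   plane error {plane_err:.2f}   2-NN error {knn_err:.2f}")
```

Raising the noise level in the sketch (i.e. more “messed up” case-specific intuitions) pushes the comparison back toward the plane, which is point (c): averaging over all the data smooths out noise that a two-neighbour estimate inherits directly.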