Those “three tiers” sound a little bit like another classification I found useful, in Baron’s “Thinking and Deciding”. These are the normative, prescriptive and descriptive questions about thinking and decision making.
Descriptive models account for how people actually decide; experimental results on biases fit here. Normative theories are about how we should think: standards by which actual decisions can be evaluated. Expected utility theory fits here.
Prescriptive models bridge the gap between the two: they are rules about how we can improve everyday thinking by bringing our decisions closer to what normative theories would advise. They account for the practical concerns of decision making, e.g. in some cases our resources (time, brainpower, etc.) are too limited for an exact computation according to the normative theory.
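As a minimal sketch of the normative side (all options, probabilities, and utilities below are hypothetical, just for illustration), expected utility says: for each option, weight each outcome's utility by its probability, sum, and pick the option with the highest total. It's exactly this kind of explicit computation that we often lack the time or brainpower to run in the moment:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# Hypothetical decision: 30% chance of rain.
# Each option lists (probability, utility) pairs for its outcomes.
options = {
    "take umbrella":  [(0.3, 5), (0.7, 8)],    # mild hassle either way
    "leave umbrella": [(0.3, -10), (0.7, 10)],  # great if dry, bad if wet
}

# The normative answer: maximize expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
```

Here "take umbrella" scores 0.3·5 + 0.7·8 = 7.1 against 4.0 for leaving it, so the normative theory recommends taking it; a prescriptive rule ("take the umbrella whenever rain looks plausible") is a cheap stand-in for running this calculation every morning.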
“Pick[ing] the moral frameworks which are best at justifying [our] ethical intuitions” is discussed at length in Rawls’ A Theory of Justice under the term “reflective equilibrium”. It doesn’t require that we hold our “ethical intuitions” as a fixed point; the process of working out (in advance and at leisure) the consequences of normative models and comparing them with our intuitions may very well lead us to revise our intuitions.
Reflective equilibrium is desirable from a prescriptive standpoint. When we are “in the thick of things” we usually will not have time to work out our moral positions with pen and paper, and will fall back on intuitions and heuristics. It is better to have trained those to yield the decisions we would wish ourselves to make if we could consider the situation in advance.