Defined in the previous sentence. A traditional or ‘exclusively act-oriented’ theory is concerned with which option an agent chooses, where the ‘options’ are understood as actions like pushing or not-pushing—acts which can be specified independently of the motive from which they’re performed.
For an agent to satisfy the requirements of a traditional theory like Act Consequentialism, they simply need to perform the right action. The contrast is with theories like CU which are not exclusively act-oriented, but require agents to actually use a particular decision procedure (and not simply act in the way that would be recommended by the procedure).
How do you expect agents to systematically act in the way recommended by a particular procedure without actually using that procedure?
Not sure what gave you the impression that I have any such expectation. People may satisfy a moral theory just occasionally. On those occasions, we would expect the group who satisfy the theory to do as well as was possible, if the theory advertises itself as a consequentialist one. Surprisingly, it turns out that this is not so for this class of theories.
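To make the surprise concrete, here is a minimal sketch of the sort of case I have in mind, Regan's Whiff-and-Poof example. The payoffs (both push: 10, neither pushes: 6, exactly one pushes: 0) are the ones standardly reported for that case; the code and its function names are just my own illustration:

```python
# Illustrative sketch of Regan's Whiff-and-Poof case.
from itertools import product

ACTS = ["push", "not-push"]

def value(a, b):
    """Total good produced by the joint act profile (a, b)."""
    if a == "push" and b == "push":
        return 10
    if a == "not-push" and b == "not-push":
        return 6
    return 0  # exactly one agent pushes

def satisfies_act_consequentialism(a, b):
    """Each agent's act maximizes the good, holding the other's act fixed."""
    best_for_1 = max(value(x, b) for x in ACTS)
    best_for_2 = max(value(a, y) for y in ACTS)
    return value(a, b) == best_for_1 == best_for_2

best = max(value(a, b) for a, b in product(ACTS, ACTS))
for a, b in product(ACTS, ACTS):
    if satisfies_act_consequentialism(a, b):
        print(f"({a}, {b}): value {value(a, b)} (best possible: {best})")

# Output:
# (push, push): value 10 (best possible: 10)
# (not-push, not-push): value 6 (best possible: 6)
```

In the (not-push, not-push) profile, each agent's act is the best available given what the other in fact does, so both satisfy Act Consequentialism; yet together they realize 6 rather than the achievable 10.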
It is not surprising at all that other agents’ counterfactual expectations of your behavior affect their behavior, which in turn can affect you.
You’ll need to explain how that relates to my previous comment. I get the sense that we’re talking past each other (or at least starting from very different places).
The point is, judging a decision theory based on the results for agents that happen to do what it recommends, rather than for agents that systematically do what it recommends because they actually compute what it recommends and then do that, is not a good way to judge decision theories, if you are judging them with the purpose of choosing one to systematically follow. In particular, a big problem with using the results for agents that merely happen to do what the decision theory recommends is that you don’t expect other agents to expect the agent you are considering to follow the decision theory in the counterfactual computations they make to inform their own decisions, which in turn affect the outcome for the agent under consideration.
Thanks, that’s helpful. I’m actually not “judging them with the purpose of choosing one to systematically follow”—my interests are more theoretical than that (e.g., I’m interested in what sort of moral theory best represents the core idea of Consequentialism).
Having said that, I agree about the importance of counterfactuals here, and hence the importance of agents following a theory rather than merely conforming their behaviour to it—indeed, that’s precisely the point I was wanting to highlight from Regan’s classic work. (Note that this is actually a distinct point from systematicity across time: we can imagine a case where agents have reliable knowledge that just this once the other person is following the CU decision procedure.)
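To illustrate that last point with a hypothetical sketch (my own construction, not Regan's formal statement of the procedure): if each agent reliably knows, on this one occasion, that the other is running the same CU-style procedure, they coordinate on the best joint profile, with no history of systematic following required. The payoffs are the same illustrative Whiff-and-Poof numbers as above:

```python
from itertools import product

ACTS = ["push", "not-push"]

def value(a, b):
    """Same illustrative Whiff-and-Poof payoffs as in the earlier sketch."""
    if a == b:
        return 10 if a == "push" else 6
    return 0

def cu_act(role, acts, value, other_follows_cu, others_act=None):
    """Hypothetical CU-style procedure (illustrative only). `role` is 0 or 1:
    which component of the joint profile this agent contributes."""
    if other_follows_cu:
        # Both agents compute the same jointly best profile (the fixed
        # ordering of `acts` gives deterministic tie-breaking) and each
        # does their part in it.
        profile = max(product(acts, acts), key=lambda p: value(*p))
        return profile[role]
    # Otherwise, fall back to act-maximizing against the other's fixed act.
    return max(acts, key=lambda x: value(x, others_act) if role == 0
                                   else value(others_act, x))

# With reliable one-off knowledge that the other follows the procedure,
# both agents select "push" and realize the best outcome (value 10),
# which two merely theory-satisfying agents are not guaranteed to achieve.
a = cu_act(0, ACTS, value, other_follows_cu=True)
b = cu_act(1, ACTS, value, other_follows_cu=True)
print(a, b, value(a, b))  # push push 10
```

Note that the deterministic tie-breaking is doing real work here: coordination requires that both agents identify the same best joint profile, which is just the counterfactual-expectations point again in miniature.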