I don’t think so, if I understand Alicorn correctly.
Alicorn says that a “consequentialist doppelganger” applies the following two-step transformation to some non-consequentialist theory X (a toy sketch of the transform follows the steps):

1. What would the world look like if I followed theory X?
2. You ought to act in such a way as to bring about the result of step 1.
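To make the contrast concrete, here is a minimal, self-contained sketch of that two-step transform, with a toy “never lie” theory standing in for X. All of the names, actions, and outcome fields here are my own hypothetical illustration, not anything from Alicorn’s post or Peterson’s book.

```python
# A toy sketch of the two-step "consequentialist doppelganger" transform above.
# The theory, actions, and outcome dictionaries are hypothetical illustrations.

def doppelganger(theory_x, outcomes):
    """Return the action that the consequentialist doppelganger of theory X recommends."""
    # Step 1: what would the world look like if I followed theory X?
    target = outcomes[theory_x(outcomes)]
    # Step 2: act in such a way as to bring about the result of step 1.
    return max(outcomes, key=lambda action: outcomes[action] == target)

def never_lie(outcomes):
    """A toy non-consequentialist theory X: never lie, whatever the payoff."""
    return next(a for a, o in outcomes.items() if not o["agent_lied"])

outcomes = {
    "tell_truth": {"agent_lied": False, "friend_upset": True},
    "lie":        {"agent_lied": True,  "friend_upset": False},
}
print(doppelganger(never_lie, outcomes))  # -> "tell_truth"
```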
But that’s not what Peterson is doing. Instead, his approach (like several earlier, incomplete or failed attempts at the same project) simply captures whatever rules and considerations the deontologist cares about in what a decision-theoretic agent (a consequentialist) calls the “outcome.” For example, the agent’s utility function can be said to assign very, very low utility to an outcome in which (1) the agent has just lied, or (2) the agent has just broken a promise previously sworn to, or (3) the agent has just violated the rights of a being that counts as a moral agent according to criterion C. Etc.
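And here, for contrast, is a sketch of that kind of encoding: the deontologist’s rules live inside the utility function over outcomes, rather than in a “copy the X-world” rule. The particular penalty values and outcome fields are hypothetical placeholders of mine, not anything Peterson specifies.

```python
# A sketch of the encoding described above: deontological rules as very low
# utilities on certain outcomes. Penalty sizes and outcome fields are hypothetical.

def utility(outcome):
    u = outcome.get("base_value", 0.0)
    if outcome.get("agent_lied"):
        u -= 1e9   # very, very low utility for an outcome in which the agent has just lied
    if outcome.get("promise_broken"):
        u -= 1e9   # ...or has just broken a previously sworn promise
    if outcome.get("rights_violated"):
        u -= 1e9   # ...or has just violated the rights of a moral agent (per criterion C)
    return u

def choose(outcomes):
    """Pick the action whose outcome the (deontically loaded) utility function ranks highest."""
    return max(outcomes, key=lambda action: utility(outcomes[action]))

outcomes = {
    "tell_truth": {"base_value": -5.0, "agent_lied": False},
    "lie":        {"base_value": 10.0, "agent_lied": True},
}
print(choose(outcomes))  # -> "tell_truth", despite lying having the higher base payoff
```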
What is the important difference between (1) assigning low utilities to outcomes in which the agent has just lied, and (2) attempting, consequentialistically, to make the world look just as it would if the agent didn’t lie? I mean, surely the way you do #2 is precisely by assigning low utilities to outcomes in which the agent lies, no?
Isn’t this a consequentialist doppelganger?