Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a Dutch-book-like argument.
I don’t hold to that one deontological morality. I think Jean Valjean was right to steal the bread. I think values/rules/duties tend to conflict, and resolving such conflicts requires that those values/rules/duties be arranged hierarchically. Thus the rightness of preventing his nephews’ starvation overrides the wrongness of stealing the bread. (“However, there is a difference between deontological ethics and moral absolutism.”)
Requiring me to think up the example before telling me the exact nature of your morality is unfair.
If telling me the exact nature is difficult enough to be a bad idea, we probably just need to terminate the discussion, but I can also talk about how this kind of principle can be formalized into a Dutch-book-like argument.
I don’t have to have an exact morality to be sceptical of the idea that consequentialism is the One True Theory.
This reply does not fit the context. If Will is asked to instantiate from a general principle to a specific example then it is not reasonable to declare the general principle null because the specific example does not apply to the morality you happen to be thinking of.
(And the “One True Theory” business is a far less subtle straw man.)
Suppose you have a system with some set of states, such that changing from state A to state B is either OK or not OK.
Then, assuming you accept certain axioms about which transitions are OK, you get a preference order on the states. Presto, consequentialism.
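A minimal formal sketch of the kind of argument being made here (the axioms below are assumed for illustration; they are not necessarily the ones originally listed):

```latex
% Illustrative axioms only -- assumed for this sketch, not quoted from the thread.
% Let S be the set of states and OK(A, B) mean "changing from A to B is OK".
\text{(1) Endpoint dependence: } \mathsf{OK}(A,B) \text{ depends only on } A \text{ and } B. \\
\text{(2) Reflexivity: } \mathsf{OK}(A,A) \text{ for all } A. \\
\text{(3) Transitivity: } \mathsf{OK}(A,B) \wedge \mathsf{OK}(B,C) \Rightarrow \mathsf{OK}(A,C). \\
\text{(4) Totality: } \mathsf{OK}(A,B) \vee \mathsf{OK}(B,A) \text{ for all } A, B. \\
\text{Define } B \succeq A \iff \mathsf{OK}(A,B). \text{ Then } \succeq \text{ is a complete preorder on } S, \\
\text{and acting permissibly reduces to moving to a state ranked at least as high.}
```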
If it’s OK to make a transition because of the nature of the transition (it’s an action which follows certain rules, respects certain rights, arises from certain intentions), then there is no need to re-explain the ordering of A, B and C in terms of anything about the states themselves—the ordering is derived from the transitions.
But if the properties of the transitions can be derived from the properties of the states, then it’s so much SIMPLER to talk about good states than good transitions.
Simplicity is tangential here; we are discussing what is right, not how to most efficiently determine it.
In what circumstances do you two actually disagree as to what one should do (I expect Peter to be more likely to answer this well as he is more familiar with typical LessWrongian utilitarianisms than Will is with Peter’s particular deontology)?
Well, a better way to frame what I said is:
If those axioms hold, then a consequentialist moral framework is right.
You can argue that those axioms hold and yet consequentialism is not the One True Moral Theory, but it seems like an odd position to take on a purely definitional level.
(also, Robert Nozick violates those axioms, if anyone still cares about Robert Nozick, and the bag-of-gold example works on him)
I don’t see why. Why would the existence of an ordering of states be a sufficient condition for consequentialism? And didn’t you need the additional argument about simplicity to make that work?
So consequentialism says “doing right is making good”. But it doesn’t say what “making good” means. So it’s a family of moral theories.
What moral theories are part of the consequentialist family? All theories that can be expressed as “doing right is making X” for some X.
If I show that your moral theory can be expressed in that manner, I show that you are, in this sense, a consequentialist.
And if I can show that consequentialism needs to be combined with rules (or something else), does that prove consequentialism is really deontology (or something else)? It is rather easy to show that any one-legged approach is flawed, but if we end up with a mixed theory we should not label it as a one-legged theory.
Then you should end up violating one of the axioms and getting a not-consequentialism.
All consequentialist theories produce a set of rules.
The right way to define “deontology”, then, is a theory that is a set of rules that couldn’t be consequentialist.
If you mix consequentialism and deontology, you get deontology.
If you mix consequentialism and deontology you get Nozickian side-constraints consequentialism.
Good example. You could have consequentialism about what you should do, and deontology about what you should refrain from.
Considering that this whole discussion was about how Robert Nozick isn’t (wasn’t?) a consequentialist, I think for these purposes we should classify his views as not consequentialism.
Would you count Timeless Decision Theory as deontological since it isn’t pure consequentialism?
No, it’s a decision theory, not an ethical theory.
I don’t understand the distinction you’re making.
Decision theories tell you what options you have: Pairs of actions and results.
Ethical theories tells you which options are superior.
Perhaps an example of what I mean will be helpful.
Suppose your friend is kidnapped and being held for ransom. Naive consequentialism says you should pay because you value his life more than the money. TDT says you shouldn’t pay because paying counterfactually causes him to be kidnapped.
Note how in the scenario the TDT argument sounds very deontological.
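A toy numerical version of this contrast (the payoff numbers and the kidnapper model below are assumptions invented for illustration, not anything asserted in the thread):

```python
# Toy sketch of the ransom example. All numbers and the kidnapper model
# are illustrative assumptions, not claims from the discussion.

VALUE_OF_FRIEND = 100.0  # utility of the friend being alive and free
RANSOM_COST = 10.0       # utility lost by paying the ransom

def kidnap_probability(pays_if_kidnapped: bool) -> float:
    """Assumed model: kidnappers mostly bother only if they expect to be paid."""
    return 0.5 if pays_if_kidnapped else 0.01

def naive_evaluation(pay: bool) -> float:
    """'Naive consequentialism': the friend is already kidnapped, so compare
    only the outcomes downstream of the ransom decision."""
    return (VALUE_OF_FRIEND - RANSOM_COST) if pay else 0.0

def policy_evaluation(pays_if_kidnapped: bool) -> float:
    """TDT/UDT-flavoured evaluation: score the policy, including its effect
    on whether the kidnapping happens at all."""
    p = kidnap_probability(pays_if_kidnapped)
    if pays_if_kidnapped:
        return (1 - p) * VALUE_OF_FRIEND + p * (VALUE_OF_FRIEND - RANSOM_COST)
    return (1 - p) * VALUE_OF_FRIEND + p * 0.0

print("naive:  pay =", naive_evaluation(True), " refuse =", naive_evaluation(False))
print("policy: pay =", policy_evaluation(True), " refuse =", policy_evaluation(False))
# With these numbers the naive comparison favours paying (90 > 0), while the
# policy comparison favours refusing (99.0 > 95.0): "paying counterfactually
# causes him to be kidnapped."
```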
It sounds deontological, but it isn’t. It’s consequentialist. It evaluates options according to their consequences.
“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any morality system. In particular, by your definition Kant’s categorical imperative is consequentialist, since it involves looking at the consequences of your actions in the hypothetical world where everyone performs them.
Yes, in that TDT-like decision/ethical theories are basically “consequentialism in which you must consider ‘acausal consequences’”.
While it may seem strange to regard ethical theories that apply Kant’s CI as “consequentialist”, it’s even stranger to call them deontological, because there is no deontic-like “rule set” they can be said to be following; it’s all simple maximization, albeit with a different definition of what you count as a benefit. TDT, for example, considers not only what your action causes (in the technical sense of future results), but the implications of the decision theory you instantiate having a particular output.
(I know there are a lot of comments I need to reply to, I will get to them, be patient.)
It certainly is strange, even if it is trivially possible. Any ‘consequentialist’ system can be implemented in a singleton deontological ‘rule set’. In fact, that’s the primary redeeming feature of deontology. Kind of like the best thing about Java is that you can use it to implement JRuby and bypass all of Java’s petty restrictions and short-sighted, rigidly enforced norms.
Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.
In both cases, while computing them you never assume anything which you know to be false, whereas Kant is not like that. (Just realised, I’m not sure this is right).
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen. Omega’s coin didn’t come up heads, and your friend has been kidnapped. Nevertheless you need to consider the consequences of your policy in those counterfactual situations.
I think counterfactual mugging was originally brought up in the context of problems which TDT doesn’t solve, that is, it gives the obvious but non-optimal answer. The reason is that regardless of my counterfactual decision, Omega still flips the same outcome and still doesn’t pay.
There are two rather different things both going under the name counterfactuals.
One is when I think of what the world would be like if I did something that I’m not going to do.
Another is when I think of what the world would be like if something not under my control had happened differently, and how my actions affect that.
They’re almost orthogonal, so I question the utility of using the same word.
Well, I’ve been consistently using the word “counterfactual” in your second sense.
Well that might explain some of our miscommunication. I’ll go back and check.
“Consequences” only in a counterfactual world. I don’t see how you can call this consequentialist without stretching the term to the point that it could include nearly any morality system.
This makes sense using the first definition; at least, according to TDT it does.
Both CDT and TDT compare counter-factuals, they just take their counter-factual from different points in the causal graph.
This is clearly using the first definition.
Counterfactual mugging and the ransom problem I mentioned in the great-grandparent are both cases where TDT requires you to consider consequences of counterfactuals you know didn’t happen.
This only makes sense with the second, and should probably be UDT rather than TDT—the original TDT didn’t get the right answer on the counterfactual mugging.
Sorry, I meant something closer to UDT.
Alright cool. So I think that’s what’s going on—we all agree but were using different definitions of counterfactuals.
You need a proof-system to ensure that you never assume anything which you know to be false.
ADT and some related theories have achieved this. I don’t think TDT has.
What I meant by that statement was the idea that CDT works by basing counterfactuals on your action, which seems a reasonable basis for counterfactuals since prior to making your decision you obviously don’t know what your action will be. TDT similarly works by basing counterfactuals on your decision, which you also don’t know prior to making it.
Kant, on the other hand, bases his counter-factuals on what would happen if everyone did that, and it is possible that his counterfactuals will involve assuming things I know to be false in a sense that CDT and TDT don’t (e.g. when deciding whether to lie, I evaluate possible worlds in which everyone lies and in which everyone tells the truth, both of which I know not to be the case).
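A deliberately crude sketch of what gets varied in each kind of counterfactual (the toy world, the agents, and the ‘computational twin’ assignment are all assumptions for illustration):

```python
# Caricature of which variables get varied in each kind of counterfactual.
# The toy world and the agents are illustrative assumptions.

def world(lies: dict) -> str:
    """Assumed toy world: trust collapses once a majority of agents lie."""
    return "trust collapses" if sum(lies.values()) > len(lies) // 2 else "trust survives"

agents = {"me": False, "alice": False, "bob": False, "carol": False, "dave": False}  # True = lies

# CDT-style counterfactual: vary only my own action, holding everyone else fixed.
cdt = {act: world({**agents, "me": act}) for act in (False, True)}

# TDT-style counterfactual: vary the output of my decision procedure, which also
# changes anyone running the same procedure (assume "bob" is my computational twin).
tdt = {act: world({**agents, "me": act, "bob": act}) for act in (False, True)}

# Kant-style counterfactual: vary *everyone's* action to match mine -- a world I
# may know to be false, since not everyone will in fact act as I do.
kant = {act: world({name: act for name in agents}) for act in (False, True)}

print(cdt)   # my lie alone doesn't tip this toy world
print(tdt)   # nor does me plus my computational twin
print(kant)  # universalised lying collapses trust
```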
Well here is the issue.
Let’s say I have to decide what to do at 2 o’clock tomorrow. If I light a stick of dynamite, I will be exploded. If I don’t, then I won’t. I can predict that I will, in fact, not light a stick of dynamite tomorrow. I will then know that one of my counterfactuals is true and one is false.
This can mess up the logic of decision-making. There are ways of handling this: see http://lesswrong.com/lw/2l2/what_a_reduction_of_could_could_look_like/. This ensures that you can never figure out a decision before making it, which makes things simpler.
I’m not sure if this contradicts what you’ve said.
And I would agree exactly with your analysis about what’s wrong with Kant, and how that’s different from CDT and TDT.
I’m not sure I agree with myself. I think my analysis makes sense for the way TDT handles Newcomb’s problem or Prisoner’s dilemma, but it breaks down for Transparent Newcomb or Parfit’s Hitch-hiker. In those cases, owing to the assistance of a predictor, it seems like it is actually possible to know your decision in advance of making it.
Well, you always know that one of your counterfactuals is true.
And Transparent Newcomb is a bit weird because one of the four possible strategies just explodes it.
There is no need to make that assumption. The whole collection of possible decisions could be located on an impossible counterfactual. Incidentally, this is one way of making sense of Transparent Newcomb.
Would you ever actually be in a situation where you chose an action tied to an impossible counterfactual? Wouldn’t that represent a failure of Omega’s prediction?
And since you always choose an action...
It matters what you do when you are in an actually impossible counterfactual, because when earlier you decide what decision theory you’d be using in that counterfactual, you might yet not know that it is impossible, and so you need to precommit to act sensibly even in the situation that doesn’t actually exist (not that you would know that if you get in that situation). Seriously. And sometimes you take an action that determines the fact that you don’t exist, which you can easily obtain in a variation on Transparent Newcomb.
When you make the precommitment-to-business-as-usual conversion, you get a principle that decision theory shouldn’t care about whether the agent “actually exists”, and focus on what it knows instead.
Yes. The actually impossible counterfactuals matter. All I’m saying is that the possible counterfactuals exist.
If you took such an action, wouldn’t you not exist? I request elaboration.
(You’ve probably misunderstood, I edited for clarity; will probably reply later, if that is not an actually impossible event.)
New reply: Yes, I agree.
All I’m saying is that when you actually make choices in reality, the counterfactual you end up using will happen. When a real Kant-Decision-Theory user makes choices, his favorite counterfactual will fail to actually occur.
You could possibly fix that by saying Omega isn’t perfect, but his predictions are correlated enough with your decision to make precommitment possible.
Yes. However, that decision theory is wrong and dumb, so we can ignore it. In particular, it never produces factuals, only counterfactuals.
You don’t need decision theories for that. You can get that far with physics and undirected imagination.
How about this:
Physics tells you pairs of actions and results.
Ethical theories tell you what results to aim for.
Decision theories combine the two.
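A minimal sketch of that proposed division of labour (the outcomes, scores, and function names are illustrative assumptions, not an established API):

```python
# Minimal sketch of the proposed division of labour between physics,
# ethics, and decision theory. All names and numbers are illustrative.
from typing import Callable, Iterable

def physics(action: str) -> str:
    """Action -> result pairs: what the world does if you act."""
    return {"pay_ransom": "friend freed, money gone",
            "refuse": "kidnapping deterred"}[action]

def ethics(result: str) -> float:
    """Which results to aim for: a score over outcomes."""
    return {"friend freed, money gone": 90.0,
            "kidnapping deterred": 99.0}[result]

def decide(actions: Iterable[str],
           physics: Callable[[str], str],
           ethics: Callable[[str], float]) -> str:
    """The decision theory: combine the two by choosing the action whose
    predicted result the ethical theory ranks highest."""
    return max(actions, key=lambda a: ethics(physics(a)))

print(decide(["pay_ransom", "refuse"], physics, ethics))  # -> "refuse"
```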
That’s only true if you’re a human being.
That is not my understanding. The only necessary addition to physics is “any possible mechanism of varying any element in your model of the universe”. I.e., you need physics and a tiny amount of closely related mathematics. That will give you a function that gives you every possible action → result pair.
I believe this only serves to strengthen your main point about the possibility of separating epistemic investigation from ethics entirely.
“any possible mechanism of varying any element in your model of the universe”
That’s a decision theory. For instance, if you perform causal surgery, that’s CDT. If you change all computationally identical elements, that’s TDT. And so on.
I don’t agree. A decision theory will sometimes require the production of action → result pairs, as is the case with CDT, TDT, and any other decision algorithm with a consequentialist component. Yet not all production of such pairs is a ‘decision theory’. A full mathematical model mapping every possible state to the outcomes produced is not a decision theory in any meaningful sense. It is just a solid understanding of all of physics.
On one hand we have (physics + the ability to consider counterfactuals) and on the other we have systems for choosing specific counterfactuals to consider and compare.
If you don’t have a system to choose specific counterfactuals, that leaves you with all counterfactuals, that is, all world-histories, theoretically possible and not. How do you use that list to make decisions?
That is my point. That is what the decision theory is for!
I reassert my claim that:
Your null-decision theory doesn’t tell you what options you have. It tells you what options you would have, were you God.
This is a claim about definitions. You don’t seem to disagree with wedrifid on any question of substance in this thread.
Ok, and it is still a claim that doesn’t refute anything I have previously said. This conversation is going nowhere. exit(5)
Exit totally reasonable. I just need to point out one thing:
It wasn’t a claim in response to anything you said. It was a response to Eugene Nier.
It would have made more sense to me if it was made in reply to the relevant comment by Eugene.
This conversation is kinda pointless. Therefore, my response comes in a short version and a long version.
Short:
Sorry, that was unclear. I did not make the mistake your last post implies I made. I’m pretty sure you’ve made some mistakes, but they’re really minor. We have nothing left to discuss.
Long:
Sorry, that was unclear.
The first time I posted it, it was a response to Eugene. Then you responded, criticizing it. Then, finally, it appears that we agree, so I reassert my original claim to make sure. In that context, this response is strange:
Ok, and it is still a claim that doesn’t refute anything I have previously said.
I wasn’t trying to refute you with this claim; I was trying to refute Eugene, and then you tried to refute the claim.