Did you convince me that nothing is morally right, or that all utilities are 0?
If you convinced me that there is no moral rightness, I would be less inclined to take action to promote the things I currently consider abstract goods, but would still be moved by my desires and reactions to my immediate circumstances.
If you did persuade me that nothing has any value, I suspect that, over time, my desires would slowly convince me that things had value again.
If ‘convincing’ includes an effect on my basic desires (as opposed to my inferentially derived ones), then I would not be moved to act in any cognitively mediated way (though I may still exhibit behaviors with non-cognitive causes).
Why the assumption that morality is analysable with utilities?
https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem
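The linked theorem says, roughly, that an agent whose preferences over gambles satisfy four axioms (completeness, transitivity, continuity, independence) acts as if it maximizes the expected value of some utility function. A toy sketch of that idea in Python (the outcomes and utility numbers here are invented purely for illustration):

```python
# Sketch of the VNM picture: given a utility function over outcomes,
# preferences over lotteries reduce to comparing expected utilities.
# The outcomes and utility values below are invented for illustration.

utility = {"chocolate": 1.0, "vanilla": 0.6, "pistachio": 0.1}

def expected_utility(lottery):
    """lottery: dict mapping outcome -> probability (summing to 1)."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

sure_vanilla = {"vanilla": 1.0}
gamble = {"chocolate": 0.5, "pistachio": 0.5}

# The VNM agent prefers whichever lottery has the higher expected utility.
# Here the sure vanilla wins: 0.6 vs 0.5*1.0 + 0.5*0.1 = 0.55.
prefers_gamble = expected_utility(gamble) > expected_utility(sure_vanilla)
```

The experiments mentioned below (e.g. the Allais-style choice patterns) are cases where people's observed choices cannot be fitted by any single `utility` table of this kind.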
...it has been shown in countless experiments that people do not behave in accordance with this theorem. So what conclusions do you want to draw from this?
...you do realise there are many problems with rational choice theory, right? See chapters 3 and 4 of ‘Philosophy of Economics: A Contemporary Introduction’ by Julian Reiss for a brief introduction to the theory’s problems. If you can’t get your hands on that, see lectures 4-6 from Philosophy of Economics: Theory, Methods, and Values http://jreiss.org/jreiss.org/Teaching.html for an even briefer introduction.
...what has this got to do with morality?
I’m going to take a look at the lectures you linked later.
For now:
Your morals are your preferences; if you say that doing A is more moral than doing B, you prefer doing A to B (barring cognitive dissonance). So if preferences can be reduced to utilities, morality can be too.
In fact, you’d have to argue that the axioms don’t apply to morality, and justify that position.
I highly doubt that morals are preferences, with or without what you (presumably loosely) term cognitive dissonance. One can have morals that aren’t preferences:
If one is a Christian deontologist, one thinks everyone ought to follow a certain set of rules, but one needn’t prefer that; one might be rather pleased that only oneself will get into heaven by following the rules.
One might believe things, events or people are morally “good” or “bad” without having any preference for or against that thing, event or person. For instance, one might think that a person is bad without preferring that that person didn’t exist.
One can believe one ought to do something without wanting to do it. This is seen very often in most people.
And one can obviously have preferences which aren’t morals. For instance, I can prefer to eat a chocolate now without thinking I ought to do so.
We should also be wary of equivocating on what we mean by “preferences”. Revealed preference theory is very popular in economics, and it equates preferences with actions, which evidently stops us having preferences about anything we don’t do, and thus means most of the usages of the word “preference” above are illegitimate. I think we normally mean some psychological state when we refer to a preference. For instance, I see the word used to mean “conscious desire” pretty often.
I’m talking about personal morals here, i.e. “what should I do”, which are the only ones that matter for my own decision making. For my own actions, the theorem shows that there must be some utility function that captures my decision-making, or I am irrational in some way.
Even if preferences are distinct from morals, each will still be expressible by a utility function or fail some axiom.
That example is one where the stakes are so low that it doesn’t make sense to spend time thinking about it. If you value your happiness and consider it good, then you ought to eat the chocolate, but the choice may represent so little utility that working this out costs more than it gains.
When I say preference I mean “what state do you want the world to be in”. The problem of akrasia is well known, and it means that our actions don’t always express our preferences.
Preferences should be over outcomes, while actions are not outcomes; a mismatch between the two can be akrasia, or the result of a misprediction.
Regardless of how you define preference, if it meets the axioms then it can be expressed as a utility function. So every form of preference corresponds to different utility functions, whether it’s revealed, actual, or some other thing.
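For a finite set of outcomes, the representation claim above is easy to see: any complete, transitive strict ranking can be numbered so that “A is preferred to B” becomes “A has the higher number”. A minimal sketch (the ranking itself is invented for the example):

```python
# Sketch: a complete, transitive strict ranking over finitely many
# outcomes can always be represented numerically - here, by rank order.
# The ranking below is invented for illustration.

ranking = ["chocolate", "vanilla", "pistachio"]  # best to worst

# Assign a higher number to each more-preferred outcome.
utility = {item: len(ranking) - i for i, item in enumerate(ranking)}

def prefers(a, b):
    """True iff a is strictly preferred to b under the numbering."""
    return utility[a] > utility[b]
```

The disputed question in this thread is not this finite construction but whether moral beliefs, or actual human preferences, form such a ranking in the first place.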
Oh, so now you’re just talking about personal morals. One of my examples already covered that: ‘One can believe one ought to do something, without wanting to do it’.
Why the presumption that utility functions capture decision-making? You acknowledge that preferences, and hence utilities, don’t always lead to decisions. And why the assumption that not meeting the axioms of rational choice theory makes you irrational? Morality might not even be appropriately described by the axioms of rational choice theory; how can you express everyone’s moral beliefs as real numbers?
On the chocolate example: I can think I ought not to eat the chocolate, but nevertheless prefer to eat it, and even actually eat it; so your counterargument does not work.
Given that you are not claiming all preferences meet the axioms, only “rational” preferences do (where’s your support?), you cannot say ‘every form of preference corresponds to different utility functions, whether it’s revealed, actual, or some other thing’. And again, we ought to ask ourselves whether preferences, or rational preferences, are actually the right sort of thing to be expressed by the axioms; can they really be expressed as real numbers?
Which axiom do you think shouldn’t apply? If you can’t give me an argument against any given axiom, then why shouldn’t I use them?
Obviously, if I prefer X to Y, and also prefer Y to X, then I’m being incoherent, and that can’t be captured by a utility function. I expressly rule out those kinds of preferences.
Argue for a specific form of preference that violates the axioms.
If you can’t give me an argument as to why all your axioms apply, then why should I accept any of your claims?
A specific form of preference that violates the axioms? Any preference which is “irrational” under those axioms; you already acknowledged that preferences of that sort exist.
I see no counterexamples to any of the axioms. If they’re so wrong, you should be able to come up with a set of preferences that someone could actually support.
You need to argue that such preferences are useful in some sense. Preferring A over B and B over A doesn’t follow the axioms, but I see no reason to use such a system. Is that really your position, that coherence and consistency don’t matter?
As an extremely basic example: I could prefer chocolate ice cream over vanilla ice cream, and prefer vanilla ice cream over pistachio ice cream. Under the von Neumann–Morgenstern axioms, however, I cannot then prefer pistachio to chocolate, because that would violate the transitivity axiom. You are correct that there is probably someone out there who holds all three preferences simultaneously. I would call such a person “irrational”. Wouldn’t you?
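The cycle in that ice-cream example can be checked mechanically. A small sketch that tests a set of pairwise strict preferences for transitivity (the preference pairs are just the ones from the example):

```python
from itertools import permutations

# Sketch: check a set of pairwise strict preferences for transitivity.
# Each pair (a, b) means "a is strictly preferred to b". The third pair
# below creates the cycle from the ice-cream example.

prefs = {("chocolate", "vanilla"),
         ("vanilla", "pistachio"),
         ("pistachio", "chocolate")}

def is_transitive(prefs):
    """True iff whenever a > b and b > c, a > c also holds."""
    items = {x for pair in prefs for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefs and (b, c) in prefs and (a, c) not in prefs:
            return False
    return True

print(is_transitive(prefs))  # the cycle makes this False
```

Dropping the ("pistachio", "chocolate") pair, or replacing it with ("chocolate", "pistachio"), makes the check pass, which is just the transitivity axiom restated as code.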