I don’t judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn’t. It’s true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
If it works to iron out inconsistencies, or to replace short-term preferences with long-term ones, that would seem to be the sort of thing that could fairly be described as reasoning.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn’t more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan for, you’re in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might be that I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way; the fact that I went the wrong way isn’t evidence that I don’t want to go to the grocery store. That’s a confusing issue, and I’m hoping we can assume, for the purposes of a discussion about morality, that the people we’re talking about have true beliefs.
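A minimal sketch of the two points above, with every hypothesis and number invented for illustration: each candidate pairing of a preference with a belief is scored by how compactly it predicts the observed behavior, and a wrong-way trip gets charged to a false belief rather than to a different preference.

```python
import math

# Toy sketch (all hypotheses and numbers invented for illustration): treat each
# candidate (preference, belief) pair as a model of the person, and score it by
# how many bits it needs to encode the observed behavior.  A noisy-choice model
# gives the predicted action high probability without making mistakes impossible.
observations = ["walk north", "walk north"]   # observed behavior; the store is actually south

hypotheses = [
    # (preference, belief about where the store is, action the pair predicts)
    ("get groceries", "store is north", "walk north"),   # false belief
    ("get groceries", "store is south", "walk south"),   # true belief
    ("enjoy a walk north", "store is south", "walk north"),
]

def code_length_bits(predicted, observed, p_predicted=0.9):
    """Bits needed to encode the observations when the model gives its predicted action prob 0.9."""
    bits = 0.0
    for action in observed:
        p = p_predicted if action == predicted else (1 - p_predicted)
        bits += -math.log2(p)
    return bits

for preference, belief, predicted in hypotheses:
    bits = code_length_bits(predicted, observations)
    print(f"{preference!r} + {belief!r}: {bits:.2f} bits")

# Two descriptions tie for shortest, and one of them keeps the grocery preference:
# the northbound walk is explained by a false belief about where the store is,
# not by the absence of the preference.
```

More observations would be needed to separate the two hypotheses that tie here, which is another way of saying that preferences and beliefs have to be guessed together.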
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
I’m not saying it’s a complete description of me. To describe how I think, you’d also need a description of my possibly-false beliefs, and you’d need to reason about uncertain knowledge of my preferences and those possibly-false beliefs.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn’t a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. “I want to go north” might really be “I believe the grocery store is north of here and I want to go to the grocery store”. “I want to go to the grocery store” might be a further conflation of preference and belief, such as “I want to get some food” and “I believe I will be able to get food at the grocery store”. Eventually you can unpack all the beliefs and get the true preference, which might be “I want to eat something interesting today”.
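A small sketch of that unpacking move, with the chain of wants and beliefs hard-coded from the example above and the representation invented for illustration:

```python
# Each derived want is modelled as a belief plus a deeper want; a bare string is
# a want with nothing left to unpack, i.e. the underlying preference.
# (Representation and wording invented for illustration.)
want_to_go_north = {
    "want": "go north",
    "because_I_believe": "the grocery store is north of here",
    "and_really_want": {
        "want": "go to the grocery store",
        "because_I_believe": "I will be able to get food at the grocery store",
        "and_really_want": "eat something interesting today",
    },
}

def unpack(want):
    """Peel away the belief-dependent layers until only the underlying preference remains."""
    while isinstance(want, dict):
        want = want["and_really_want"]
    return want

print(unpack(want_to_go_north))  # -> 'eat something interesting today'
```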
That still doesn’t explain what the difference between your preferences and your biases is.
That’s rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It’s a term we’re defining because it’s useful, and we can define it so that it holds from birth onward. Tim had a short-term preference, dated around age 3 months, to suck mommy’s breast, and Tim apparently has a preference, dated around age 44 years, to get clarity about what these guys mean when they talk about morality. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person’s preferences, and preferences that don’t change over time tend to be simpler, but if that’s contradicted by observation, you settle for different preferences at different times.
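One way to read “prefer the simpler description unless observation contradicts it” is as a crude description-length comparison; in the sketch below the observations, the complexity penalty, and the two candidate descriptions are all invented for illustration.

```python
# Crude description-length comparison (everything here is invented for illustration):
# a description pays for each rule it contains and for each observation it gets wrong.
observations = [(0.25, "suck mommy's breast"),      # (age in years, what Tim pursued)
                (44.0, "get clarity about morality")]

def predicted(rules, age):
    """rules: list of (min_age, preference); the last rule whose min_age <= age applies."""
    choice = None
    for min_age, preference in rules:
        if age >= min_age:
            choice = preference
    return choice

def description_length(rules, observations):
    rule_cost = 8 * len(rules)                       # longer descriptions cost more
    error_cost = sum(32 for age, pursued in observations
                     if predicted(rules, age) != pursued)
    return rule_cost + error_cost

lifelong = [(0.0, "suck mommy's breast")]            # one unchanging preference
dated    = [(0.0, "suck mommy's breast"),
            (18.0, "get clarity about morality")]    # preferences indexed by age

for name, rules in [("lifelong", lifelong), ("dated", dated)]:
    print(name, description_length(rules, observations))

# The lifelong description is shorter but mispredicts the age-44 observation,
# so once the data contradict it, the dated description wins.
```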
I suppose I should have said “If a preference changes as a consequence of reasoning or reflection, it wasn’t a preference”. If the context of the statement is lost, that distinction matters.
So you are defining “preference” in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
I agree! Consider, for instance, taste in particular foods. I’d say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you’re hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it’s pleasurable—but I think the proper level of unpacking is “experience drinking coffee”, not “experience pleasurable sensations”, because the experience being pleasurable is what makes it a preference in this case. That’s how it seems to me, at least. Am I missing something?