No, we don’t. That’s making recommendations about how they can satisfy the preferences they already have. That you don’t seem to understand this distinction is concerning: instrumental and terminal values are different.
My position is in line with that—people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.
Why is my terminal value pleasure? Why should I want it to be?
Fundamentally, because pleasure feels good and preferable, and it doesn’t need anything additional (such as conditioning through social norms) to make it desirable.
Why should I desire what you describe? What’s wrong with values more complex than a single transistor?
Also, naturalistic fallacy.
It’s not a matter of what you should desire; it’s a matter of what you’d desire if you were internally consistent. Theoretically, you could have terminal values other than pleasure, for instance if you couldn’t experience pleasure at all.
Also, the naturalistic fallacy isn’t a fallacy, because “is” and “ought” are bound together.
Why is the internal consistency of my preferences desirable, particularly if it would lead me to prefer something I am rather emphatically against?
Why should the way things are be the way things ought to be?
(Note: Being continuously downvoted is making me reluctant to continue this discussion.)
One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently get the most of what you want, without your preferences contradicting each other.
As far as preferences and motivation are concerned, any claim about how things should be must appeal to preferences as they are, or at least as they would be if they were internally consistent.
Retracted: Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities.
I very much disagree. I think you’re couching this deontological moral stance as something more than the subjective position that it is. I find your morals abhorrent, and your normative statements regarding others’ preferences to be alarming and dangerous.
You can be Dutch booked with preferences too. If you prefer A to B, B to C, and C to A, I can make money off of you by offering a circular trade to you.
That’s only if I’m unaware that such a strategy is taking place. Even if I were aware, I am a dynamic system evolving in time, and I might be perfectly happy with the expenditure per utility shift.
Unless I were opposed to that sort of arrangement, I see nothing wrong with it. It is my prerogative to spend resources to satisfy my preferences.
That’s exactly the problem: you’d be happy with the expenditure per shift, but every time a full cycle was completed, you’d be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch back to A, you end up with A and $7, worse off than when you started, despite being satisfied with each transaction. That’s the cost of inconsistency.
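For concreteness, here is a minimal sketch of that money pump in Python. The trade cycle, the $1 fee, and the $10 starting cash are just the illustrative numbers from the comment above, not anything more general.

```python
# Money-pump sketch: an agent holds A and prefers B to A, C to B,
# and A to C (a preference cycle). It happily pays $1 for each trade
# it prefers; after one full cycle it holds A again but is $3 poorer.

TRADE_CYCLE = ["B", "C", "A"]  # each item is preferred to the previous holding

def run_money_pump(money: float = 10.0, fee: float = 1.0, cycles: int = 1) -> float:
    """Return the agent's remaining cash after running the preference cycle."""
    holding = "A"
    for _ in range(cycles):
        for item in TRADE_CYCLE:
            # The agent prefers `item` to `holding`, so it accepts the trade.
            money -= fee
            holding = item
    return money

print(run_money_pump())  # 7.0 -- back to holding A, $3 down after one cycle
```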
And 3 utilons. I see no cost there.
But presumably you don’t get utility from switching as such; you get utility from having A, B, or C. So if you complete a cycle for free (without me charging you), you have exactly the same utility as when you started, and if I charge you, then when you’re back to A, you have lower utility.
If my utility is over states of the world, as opposed to the transitions between A, B, and C, I don’t see how it’s possible for me to have cyclic preferences, unless you’re claiming that my utility isn’t ordinal for some reason. If that’s the sort of inconsistency in preferences you’re referring to, then yes, it’s bad, but I don’t see how ordinal utility necessitates wireheading.
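A worked version of that point, assuming preferences are represented by some real-valued ordinal utility $u$ over the states themselves: a strict preference cycle is impossible, since

\[
A \succ B \succ C \succ A \;\Rightarrow\; u(A) > u(B) > u(C) > u(A),
\]

which is a contradiction, because $>$ is transitive on the reals. Cyclic preferences therefore cannot come from any utility function over states.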
Regarding inconsistent preferences, yes, that is what I’m referring to.
Ordinal utility doesn’t by itself necessitate wireheading (for instance, if you are incapable of experiencing pleasure), but if you can experience pleasure, then you should wirehead, because pleasure has the quale of desirability: pleasure feels desirable.
And you think that “desirability” in that statement refers to the utility-maximizing path?
I mean that pleasure, by its nature, feels utility-satisfying. I don’t know what you mean by “path” in “utility-maximizing path”.
Can you define ‘terminal values’, in the context of human beings?
Terminal values are those sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.
I know what terminal values are, and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings, that is, entities whose cognitive architectures have no explicitly defined utility function and comprise multiple interacting subsystems which may value different things (e.g., emotional vs. deliberative systems). I’m well aware of the huge impact my emotional subsystem has on my decision making. However, I don’t consider it ‘me’; rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure.
To follow up on my initial question, it had been intended to lay the groundwork for this follow-up: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole ‘actual’ terminal value of a human is pleasure?
That upon ideal rational deliberation, and given all the relevant information, a person will choose to pursue pleasure as a terminal value.