I know what terminal values are, and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings: entities whose cognitive architectures have no explicitly defined utility function and comprise multiple interacting subsystems that may value different things (e.g., emotional vs. deliberative systems). I’m well aware of the huge impact my emotional subsystem has on my decision making. However, I don’t consider it ‘me’; rather, I consider it an external black box that interacts very closely with what I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that irreversibly sacrifice a significant portion of my expected future pleasure.
My initial question was intended to lay the groundwork for this follow-up: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole ‘actual’ terminal value of a human is pleasure?
> What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole ‘actual’ terminal value of a human is pleasure?
That upon ideal rational deliberation, and with all the relevant information, a person will choose to pursue pleasure as a terminal value.