For what it’s worth, I value happiness alone (though not my happiness in particular).
The funny thing is you probably don’t even know what happiness is. Do you not value pleasure, contentment, joy, or satisfaction? None of these may even turn out to be a single thing on closer inspection (like jade, which turned out to be two distinct minerals).
I don’t understand.
I don’t know exactly what happiness is, but I’m pretty certain it’s something like the partial derivative of desires with respect to beliefs, i.e., you’re happy if you start wanting what’s going on more. Or it might be the dot product of desires and beliefs, i.e., you believe your desires are fulfilled.
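To make those two readings concrete (my own notation; a rough sketch, not something the comment above is committed to): write $\vec d$ for a vector of desire strengths and $\vec b$ for the believed degrees to which each desire is fulfilled. Then:

\[ H_{\text{derivative}} \;\approx\; \frac{\partial D}{\partial B} \qquad \text{(happiness as the rate at which desire for the current state grows as beliefs update)} \]

\[ H_{\text{dot product}} \;=\; \vec d \cdot \vec b \;=\; \sum_i d_i\, b_i \qquad \text{(happiness as believed desire-fulfillment)} \]

Here $D$ stands for total desire for the current state of affairs and $B$ for the beliefs about it; these symbols are assumptions of mine, introduced only to illustrate the two candidate definitions.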
Are you sure about that? You could be, but let’s say I told you that in 5 years you would develop dementia. This dementia would not make you unhappy; in fact, it would make you slightly happier, and your condition would not make any other person unhappier. A very artificial situation, but still: would you consider it a good thing that you would become demented?
The idea of being demented makes me somewhat unhappy, which could certainly cause me to choose unhappiness over dementia, but that’s a statement of my desires, not my moral beliefs. Morally, dementia would be better.
If we changed the condition to 10 seconds (instead of 5 years), would that make you choose dementia for sure?
By “morally” I assume you mean something you should do? But how did you come to the conclusion that it is moral to choose dementia (happiness), and why do you deem it moral to care about others’ happiness? (I sincerely hope my questions aren’t perceived as utter nonsense.)
I think so. I’m not certain why.
It’s better because I’m happier. It might be somewhat bad for Present!me (who feels bad making that decision), but I assume Future!me’s happiness will make up for that.
There’s nothing moral about caring about others’ happiness. It’s their happiness itself that is moral. Happiness is good.
So what’s your response to the pill question?
I’d take a pill to make me happy. The exact kind of joy is irrelevant.
I prefer to place value on liberty and excellence over happiness. That is, the breadth of available options for individuals to self-determine.
I would rather be twice as capable while half as happy than I would be twice as happy while half as capable.
Excellence?
I suspect a lot of people consider liberty important because they like it. I don’t. I very much prefer my choices being made for me. If someone gave me more freedom, I wouldn’t like that. Could they really be said to be doing me a favor? Or is it better that way, and the fact that I’m against it doesn’t matter?
Excellence as in “prowess”, “capability”, “competence”, “skillfulness”, “strength”.
Would you agree that while you would prefer to have your choices made for you, you would strongly prefer to have some say in who makes those choices?
I ask this question to reveal that we’re focusing on two different things with the notion of “freedom”. You associate “freedom” with “range of choices”. I associate “freedom” with “range of outcomes”. Normally, these are indistinguishable from one another. But there are practical cases where they aren’t. For example: a voluntary slave need only make one choice: who is his master?
Wow, I don’t know if it was your intention, but you just made the most concise and elegant distinction between libertarian free will (outcome) and compatibilist free will (choice). Bravo!
But then I have to ask: by “range of outcomes” do you mean the expected range of outcomes or the genuine range of outcomes (real in the sense that not even Laplace’s demon could know the outcome for sure)?
That’s rather interesting, since I myself am a compatibilist and a physicalist. My phrasing was not meant to be an argument for libertarianism over compatibilism/determinism, and in fact the definition of freedom as being associated with a greater range of available outcomes is entirely compatible with, well, compatibilism.
I do not subscribe to the notion that the universe is wholly deterministic anyhow, so Laplace’s demon would simply be too confused… although maybe he’ll know something we don’t.
To answer you more directly: I don’t know that there’s a material difference between “expected range of outcomes” and “genuine range of outcomes”, as I was speaking in the abstract anyhow.
But then what is the difference between “range of choices” and “expected range of outcomes”?
I get it! I’m a bit slow sometimes. Love the comic by the way!
I’d want it to be someone who makes good choices, since that will make me happier. Other than that, choosing who is just another choice I’d wish to avoid.
I don’t want a range of outcomes. I want a good outcome.
Are you trying to figure out what makes me happy, or whether or not I care about freedom on moral grounds? If freedom did make me happy, I’d just talk about a hypothetical person who preferred slavery. I already told you I only find happiness morally relevant.
These are synonymous when we must remain agnostic as to what each individual would select as a “good outcome” for his or her self.
No. My argument is one of practical utility, not of moral virtue. If we universally expand the range of available outcomes, then the number of “good outcomes” increases for each individual, because each individual is more likely to have access to the things he or she actually wants.
Are you saying that freedom is an instrumental value, and that we actually agree on terminal values?
I would be more inclined to say that if you prefer to be happy then you should have the freedom—the option—to be happy.
So I don’t know that we agree on that, as I would not prefer to be “happy” (in fact, I worry very much about becoming content and, as a result, sliding into complacency; I believe dissatisfaction with the now is an integral element of what makes me personally a “worthwhile” human being). But I do know that my belief in freedom, as currently expressed, means that just because I want to be one way does not mean I am asserting that all people should wind up like me.
Diversity of individual outcomes in order to allow individuals to seek out and obtain their individual preferences (in a manner that does not directly impede the ability of others to do the same) is (or is close to) an intrinsic good.
So, freedom is an instrumental value, but happiness is not the terminal value?
It sounds like your terminal value is preference fulfillment, or something to that effect.
I’m not sure that the mere fact that something is a terminal value prevents it from also being an instrumental value. Perhaps I might agree with the notion that “maintaining high instrumental value is a terminal value”—though I haven’t really put deep thought into that one. I’ll have to consider it.
Passively, yes.
Possibly relevant
Is that a yes?
Edit: Whoops. I didn’t notice that you weren’t the person I was originally talking to.
The link is irrelevant. It’s about instrumental values. I was talking about terminal values. I’m not sure what Logos01 was talking about, but if it is instrumental values, this isn’t so much a debate as a mutual misunderstanding, and not much is relevant.
How are happiness and unhappiness weighed against each other, to become a single value?
Is there a strict boundary between emotions, or a sliding scale among them all?
I consider unhappiness negative happiness. If you want to do what you’re currently doing more, you’re happy. The more it makes you want to do it, the happier you are. If it makes you want to do it more by a negative amount, it’s negative happiness.
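Taking that at face value (a minimal formalization in my own notation, assuming the “wanting to continue” reading above):

\[ H \;=\; \Delta d_{\text{current activity}}, \qquad H > 0 \Rightarrow \text{happy}, \qquad H < 0 \Rightarrow \text{unhappy} \]

where $\Delta d_{\text{current activity}}$ is the change in how much you want to keep doing what you are currently doing, so unhappiness is literally negative happiness on a single signed scale.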