People often believe that it’s inherently good to be happy, rather than thinking that their happiness level should track the actual state of affairs (and thus be a useful tool for emotional processing and communication). Why?
Isn’t your happiness level one of the most important parts of the “actual state of affairs”? How would you measure the value of the actual state of affairs other than according to how it affects your (or others’) happiness?
It seems to me that it is inherently good to be happy. All else equal, being happier is better.
That said, I agree that it’s good to pay a cost in temporarily lower happiness (e.g. for emotional processing) to achieve more happiness later. If that’s all you mean—that the optimal strategy allows for temporary unhappiness, and it’s unwise to try to force yourself or others to be happy in all moments—then I don’t disagree.
“Isn’t the score I get in the game I’m playing one of the most important parts of the ‘actual state of affairs’? How would you measure the value of the actual state of affairs other than according to how it affects your (or others’) scores?”
I’m not sure this analogy is, by itself, convincing. But it’s suggestive, in that happiness is a simple, scalar-like thing, and it would be strange for such a simple thing to have a high degree of intrinsic value. Rather, on a broad perspective, it would seem that the things of most intrinsic value are those that are computationally interesting, that can explore and cohere different sources of information, and so on, rather than very simple scalars. (Of course, scalars can offer information about other things.)
On an evolutionary account, why would it be fit for an organism to care about a scalar quantity, except insofar as that quantity is correlated with the organism’s fitness? It would seem that wireheading is a bug, from a design perspective.
I get the analogy. And I guess I’d agree that I value complex positive emotions that are intertwined with the world more than sort of one-note ones. (E.g. being on molly felt nice but kind of empty.)
But I don’t think there’s much intrinsic value in the world other than the experiences of sentient beings.
A cold and lifeless universe seems not that valuable. And if the universe has life I want those beings to be happy, all else equal. What do you want?
And regarding the evolutionary perspective, what do I care what’s fit or not? My utility function is not inclusive genetic fitness.
Experiences of sentient beings are valuable, but have to be “about” something to properly be experiences, rather than, say, imagination.
I would rather that conditions in the universe are good for the lifeforms, and that the lifeforms’ emotions track the situation, such that the lifeforms are happy. But if the universe is bad, then it’s better (IMO) for the lifeforms to be sad about that.
On the evolutionary point: the puzzle is that evolution would create animals that try to wirehead themselves; it’s not a moral argument against wireheading.
“I would rather that conditions in the universe are good for the lifeforms”
How do you measure this? What does it mean that conditions in the universe are good for the lifeforms other than that it gives them good experiences?
You’re wanting to ground positive emotions in objectively good states. But I’m wanting to ground the goodness of states in the positive emotions they produce.
Perhaps there’s some reflexivity here, where we both evaluate positive emotions based on how well they track reality, and we also evaluate reality on how much it produces positive emotions. But we need some way for it to bottom out.
For me, I would think positive emotions are more fundamentally good than universe states, so that seems like a safer place to ground the recursion. But I’m curious if you’ve got another view.
I don’t have a great theory here, but some pointers at non-hedonic values are:
“Wanting” as a separate thing from “liking”: what is planned/steered toward, versus what affective states are generated. See this. In a literal sense, people don’t very much want to be happy.
It’s common to speak in terms of “mental functions”, e.g. perception and planning. The mind has a sort of “telos”/direction, which is not primarily towards maximizing happiness (if it were, we’d be happier); rather, the happiness signal has a function as part of the mind’s functioning.
The desire to not be deceived, or to be correct, requires a correspondence between states of mind and objective states. To be deceived about, say, which mathematical results are true/interesting means exploring a much more impoverished space of mathematical reasoning than one could with intact mathematical judgment.
Related to deception, social emotions are referential: they refer to other beings. The emotion can be present without the other beings existing, but this is a case of deception. Living in a simulation in which all apparent intelligent beings are actually (convincing) nonsentient robots seems undesirable.
Desire for variety. Having the same happy mind replicated everywhere is unsatisfying compared to having a diversity of mental states being explored. Perhaps you could erase your memory so you could re-experience the same great movie/art/whatever repeatedly, but would you want to?
Relatedly, the best art integrates positive and negative emotions. Having only positive emotions is like painting using only warm colors.
In epistemic matters we accept that beliefs about what is true may be wrong, in the sense that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. Similarly, we may accept that beliefs about the quality of one’s experience may be wrong, in that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. There has to be a starting point for investigation (as there is in epistemic matters), which might or might not be hedonic, but coherence criteria and so on will modify the starting point.
I suspect that some of my opinions here are influenced by certain meditative experiences that reduce the degree to which experiential valence seems important, in comparison to variety, coherence, and functionality.