I get the analogy. And I guess I’d agree that I value more complex positive emotions that are intertwined with the world more than sort of one-note ones. (E.g. being on molly felt nice but kind of empty.)
But I don’t think there’s much intrinsic value in the world other than the experiences of sentient beings.
A cold and lifeless universe seems not that valuable. And if the universe has life I want those beings to be happy, all else equal. What do you want?
And regarding the evolutionary perspective, what do I care what’s fit or not? My utility function is not inclusive genetic fitness.
Experiences of sentient beings are valuable, but have to be “about” something to properly be experiences, rather than, say, imagination.
I would rather that conditions in the universe are good for the lifeforms, and that the lifeforms’ emotions track the situation, such that the lifeforms are happy. But if the universe is bad, then it’s better (IMO) for the lifeforms to be sad about that.
The issue with evolution is that it’s a puzzle why evolution would create animals that try to wirehead themselves; it’s not a moral argument against wireheading.
“I would rather that conditions in the universe are good for the lifeforms”
How do you measure this? What does it mean that conditions in the universe are good for the lifeforms, other than that they give them good experiences?
You’re wanting to ground positive emotions in objectively good states. But I’m wanting to ground the goodness of states in the positive emotions they produce.
Perhaps there’s some reflexivity here, where we both evaluate positive emotions based on how well they track reality, and we also evaluate reality on how much positive emotion it produces. But we need some way for it to bottom out.
For me, I would think positive emotions are more fundamentally good than universe states, so that seems like a safer place to ground the recursion. But I’m curious if you’ve got another view.
I don’t have a great theory here, but some pointers at non-hedonic values are:
“Wanting” as a separate thing from “liking”; what is planned/steered towards, versus what affective states are generated? See this. In a literal sense, people don’t very much want to be happy.
It’s common to speak in terms of “mental functions”, e.g. perception and planning. The mind has a sort of “telos”/direction, which is not primarily towards maximizing happiness (if it were, we’d be happier); rather, the happiness signal has a function as part of the mind’s functioning.
The desire to not be deceived, or to be correct, requires a correspondence between states of mind and objective states. To be deceived about, say, which mathematical results are true/interesting, means exploring a much more impoverished space of mathematical reasoning than one could with intact mathematical judgment.
Related to deception, social emotions are referential: they refer to other beings. The emotion can be present without the other beings existing, but this is a case of deception. Living in a simulation in which all apparent intelligent beings are actually (convincing) nonsentient robots seems undesirable.
Desire for variety. Having the same happy mind replicated everywhere is unsatisfying compared to having a diversity of mental states being explored. Perhaps you could erase your memory so you could re-experience the same great movie/art/whatever repeatedly, but would you want to?
Relatedly, the best art integrates positive and negative emotions. Having only positive emotions is like painting using only warm colors.
In epistemic matters we accept that beliefs about what is true may be wrong, in the sense that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. Similarly, we may accept that beliefs about the quality of one’s experience may be wrong, in that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. There has to be a starting point for investigation (as there is in epistemic matters), which might or might not be hedonic, but coherence criteria and so on will modify the starting point.
I suspect that some of my opinions here are influenced by certain meditative experiences that reduce the degree to which experiential valence seems important, in comparison to variety, coherence, and functionality.