When a baby feels happy, it feels happy. Nothing else happens.
There are differences among wanting, liking, and endorsing something.
A happy blob may like feeling happy, and might even feel a desire to experience more of it, but it cannot endorse things if it doesn’t have agency. Human fulfillment and wellbeing typically involve some element of all three.
An unthinking being cannot value even its own happiness, because the concept traditionally meant by “values” refers to the goals that an agent points itself at, and an unthinking being isn’t agentic—it does not make plans to steer the world in any particular direction.
Then if you also say that happiness is good, and that good implies value, one must ask, who or what is valuing the happiness? The rock? The universe?
I am. When I say “happiness is good”, this is isomorphic with “I value happiness”. It is a statement about the directions in which I attempt to steer the world.
Like there must be some physical process by which happiness is valued. Maybe a dimension by which emotional value is expressed?
The physical process that implements “valuing happiness” is the firing of neurons in a brain. It could in theory be implemented in silicon as well, but it’s near-certainly not implemented by literal rocks.
something that is challenging, and requires a certain kind of problem solving, where the solution is beautiful in some way
Yep, that makes sense. I notice, however, that these things do not appear to be emotions. And that’s fine! It is okay to innately value things that are not emotions! Like “having a model of the world that is as accurate as possible”, i.e. truth-seeking. Many people (especially here on LW) value knowledge for its own sake. There are emotions associated with this goal, but the emotions are ancillary. There are also instrumental reasons to seek truth, but they don’t always apply. The actual goal is “improving one’s world-model” or something similar. It bottoms out there. Emotions need not apply.
The key piece though is that regardless, as tslarm says, “emotions are accompanied by (or identical with, depending on definitions) valenced qualia”. They always have some value.
First off, I’m not wholly convinced this is true. I think emotions are usually accompanied by valenced qualia, but (as with my comments about curiosity) not necessarily always. Sure, if you define “emotion” so that it excludes all possible counterexamples, then it will exclude all possible counterexamples, but also you will no longer be talking about the same concept as other people using the word “emotion”.
Second, there is an important difference between “accompanied by valenced qualia” and “has value”. There is no such thing as “inherent value”, absent a thinking being to do the evaluation. Again, things like values and goals are properties of agents; they reflect the directions in which those agents steer.
Finally, more broadly, there’s a serious problem with terminally valuing only the feeling of emotions. Imagine a future scenario: all feeling beings are wired to an enormous switchboard, which is in turn connected to their emotional processors. The switchboard causes them to feel an optimal mixture of emotions at all times (whatever you happen to think that means) and they experience nothing else. Is this a future you would endorse? Does something important seem to be missing?
Does this also mean there is no such thing as “inherent good”? If so, then one cannot say “X is good”; they would have to say “I think that X is good”, for “good” would be a fact of their mind, not the environment.
This is what I thought the whole field of morality is about. Defining what is “good” in an objective fundamental sense.
And if “inherent good” can exist but not “inherent value”, how would “good” be defined, for it wouldn’t be allowed to use “value” in its definition?
Does this also mean there is no such thing as “inherent good”?
Yes.
If so, then one cannot say “X is good”; they would have to say “I think that X is good”, for “good” would be a fact of their mind, not the environment.
One can say all sorts of things. People use the phrase “X is good” to mean lots of things: “I’m cheering for X”, “I value X”, “X has consequences most people endorse”, etc. I don’t recommend we abandon the phrase, for many phrases are similarly ambiguous but still useful. I recommend keeping this ambiguity in mind, however, and disambiguating where necessary.
This is what I thought the whole field of morality is about. Defining what is “good” in an objective fundamental sense.
I would no more describe morality as solely attempting to define objective good than I would describe physics as solely attempting to build a perpetual motion machine. Morality is also about the implications and consequences of specific values and to what extent they converge, and a great many other things. The search for “objective” good has, IMO, been a tragic distraction, but one that still occasionally bears interesting fruit by accident.