I think you have correctly noticed an empirical fact about emotions (they tend to be preferred or dispreferred by animals who experience them) but are drawing several incorrect conclusions therefrom.
First and foremost, my model of the universe leaves no room for it to value anything. “Values” happen to be a thing possessed by thinking entities; the universe cares not one whit more for our happiness or sadness than the rules of the game of chess care whether the game is won by white or black. Values happen inside minds; they are not fundamental to the universe in any way.
Secondly, emotions are not exactly and always akin to terminal values, even if they seem to hang out together. For a counterexample to the claim “emotions are valued positively or negatively”, consider the case of curiosity, which you’ve labeled an emotional value. I don’t know about you, but I would not say that feeling curious about something “feels good”. I would almost call it a category error to even try to label the feeling as “good” or “bad”. It certainly feels good to learn something, or to gain insight, or to satisfy curiosity, but the sense of curiosity itself is neutral at best.
On top of that, I would describe myself as reflectively endorsing the process of learning for its own sake, not because of the good feeling it produces. The good feeling is a bonus. The emotion of curiosity is a useful impetus to getting the thing I actually value, insight.
I also think you’re calling something universal to humans when it really isn’t. For instance, you’re underestimating the degree to which masochists are genuinely wired differently, such that they sometimes interpret a neural pain signal that other humans would parse as “bad” as instead feeling very good. There are many similar examples where this model breaks down—for instance, in the concept of “loving to hate someone” i.e. the positive valence that comes with a feeling of righteous anger at Sauron.
I agree that there are good reasons to value the feelings of others. I’m not sure the Ship of Theseus argument is one of them, really, but I’m also not sure I fully understood your point there.
I agree that AI probably won’t feel anything. I disagree that we would expect its “soul searching” to land anywhere close to valuing human emotions. I expect AIs grown by gradient descent to end up a massive knot of conflicting values, similar to how evolution made humans a massive knot of conflicting values, but I expect the AI’s efforts to unravel this knot will land it very far away from us, if only because the space of values it is exploring is so terribly vast and the cluster of human values so terribly small in comparison. There’s no moral force that impels the AI to value things like joy or friendship; the fact that we value them is a happy accident.
I also suspect that some of the things you’re calling “material terminal values” are actually better modeled as instrumental, which is why they seem so squirrely and changeable sometimes. I value tabletop RPGs because I find them fun, and people having fun is the terminal goal (well, the main one). If tabletop RPGs stopped being fun, then I’d lose interest. I suspect something similar may be going on with valuing kinetic sculptures—I’m guessing you don’t want to tile the universe with them, you simply enjoy the process of building them.
(People change their terminal values sometimes too, especially when they notice a conflict between two or more of them, but it’s more rare. I know mine have changed somewhat.)
I think maybe the missing piece is that it’s perfectly okay to say “I value these things for their own sake” without seeking a reason that everyone else and their universe should too.
“Values” happen to be a thing possessed by thinking entities
What happens then when a non-thinking thing feels happy? Is that happiness valued? By whom? Or do you think this is impossible?
I can imagine it being possible for a fetus in the womb, without any thoughts, sense of self, or ability to move, to still be capable of feeling happiness. Now try to imagine a hypothetical person with a severe mental disability preventing them from having any cohesive thoughts, sense of self, or ability to move. Could they still feel happiness? What happens when the dopamine receptors get triggered?
It is my hypothesis that the mechanism by which emotions are felt does not require a “thinking” agent. This could be false, and I now see that it is an assumption many of my arguments rely on. Thank you for catching that.
It just seems so clear to me. When I feel pain or pleasure, I don’t need to “think” about it for the emotion to be felt. I just immediately feel the pain or pleasure.
Anyway, if you assume that it is possible for a non-thinker to still be a feeler, then there is nothing logically inconceivable about a hypothetical happy rock. Then if you also say that happiness is good, and that good implies value, one must ask, who or what is valuing the happiness? The rock? The universe?
OK, maybe not “the universe” in the sense of the collection of all objects within it. I’m more trying to say “the fabric of reality”. Like there must be some physical process by which happiness is valued. Maybe a dimension by which emotional value is expressed?
I also suspect that some of the things you’re calling “material terminal values” are actually better modeled as instrumental
You are partly correct about this. When I said I terminally value the making of kinetic sculptures, I was definitely making a simplification. I don’t value the making of all kinetic sculptures, and I also value the making of things which aren’t kinetic sculptures. I don’t, however, do it because I think it is “fun”. I can’t formally define what the actual material terminal goal is, but it is something more along the lines of “something that is challenging, and requires a certain kind of problem solving, where the solution is beautiful in some way”.
Anyway, it is often the case that the making of kinetic sculptures fits this description.
It is not true that I “simply enjoy the process of building them”. Whatever the actual definition of my goal is, I don’t want it because it is instrumental to some emotion. This is precisely what I am defining a material terminal goal to be: any terminal goal which is not an emotion.
I also think you’re calling something universal to humans when it really isn’t.
I should have clarified this better. I am not saying the intensity or valence direction of emotions is universal. I am simply saying that emotions, in general, are universally valued. Thank you for correcting me on the way masochists work. I didn’t realize they were “genuinely wired differently”; I just assumed they had some conflicting goal which made pain worth it. This doesn’t break my argument, however. I would say that the masochist is not feeling pain at that point. They would be feeling some other emotion, for emotions are defined by the chemical and neural processes which make them happen. Similar to how my happiness and your happiness are not the same, but they are close enough to be grouped under one word. The key piece, though, is that regardless, as tslarm says, “emotions are accompanied by (or identical with, depending on definitions) valenced qualia”. They always have some value.
I agree that there are good reasons to value the feelings of others. I’m not sure the Ship of Theseus argument is one of them, really, but I’m also not sure I fully understood your point there.
Ahhh, yeah, sorry that wasn’t the clearest; I was making the point that one should value the emotions of more than just other humans. Like pigs, cats, dogs, or feely blobs.
When a baby feels happy, it feels happy. Nothing else happens.
There are differences among wanting, liking, and endorsing something.
A happy blob may like feeling happy, and might even feel a desire to experience more of it, but it cannot endorse things if it doesn’t have agency. Human fulfillment and wellbeing typically involves some element of all three.
An unthinking being cannot value even its own happiness, because the concept traditionally meant by “values” refers to the goals that an agent points itself at, and an unthinking being isn’t agentic—it does not make plans to steer the world in any particular direction.
Then if you also say that happiness is good, and that good implies value, one must ask, who or what is valuing the happiness? The rock? The universe?
I am. When I say “happiness is good”, this is isomorphic with “I value happiness”. It is a statement about the directions in which I attempt to steer the world.
Like there must be some physical process by which happiness is valued. Maybe a dimension by which emotional value is expressed?
The physical process that implements “valuing happiness” is the firing of neurons in a brain. It could in theory be implemented in silicon as well, but it’s near-certainly not implemented by literal rocks.
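To make that concrete, here’s a minimal toy sketch (entirely my own illustration; the agent, the actions, and the numbers are all made up) of the difference between an agent that values happiness, i.e. steers toward it, and a thing that merely has a happiness-shaped internal state:

```python
from dataclasses import dataclass

@dataclass
class PassiveFeeler:
    """Has a valenced internal state, but no machinery for steering the world."""
    happiness: float

class ValuingAgent:
    """Picks actions that move the world toward states it scores highly.
    That scoring function is the closest thing here to a value."""
    def __init__(self, world_model, value_fn):
        self.world_model = world_model  # action -> predicted resulting happiness
        self.value_fn = value_fn        # predicted outcome -> how much the agent values it

    def choose(self, actions):
        # Steering: pick the action whose predicted outcome it values most.
        return max(actions, key=lambda a: self.value_fn(self.world_model(a)))

# Made-up numbers: how much happiness each action is predicted to produce.
world_model = {"comfort_friend": 0.9, "do_nothing": 0.2}.get
agent = ValuingAgent(world_model, value_fn=lambda h: h)
print(agent.choose(["comfort_friend", "do_nothing"]))  # -> comfort_friend

rock = PassiveFeeler(happiness=1.0)  # granting your hypothetical: it feels, but it cannot steer
```

The PassiveFeeler has the state; the ValuingAgent has a preference ordering over futures and acts on it, and it’s only the latter that the word “values” is doing any work for.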
something that is challenging, and requires a certain kind of problem solving, where the solution is beautiful in some way
Yep, that makes sense. I notice, however, that these things do not appear to be emotions. And that’s fine! It is okay to innately value things that are not emotions! Like “having a model of the world that is as accurate as possible”, i.e. truth-seeking. Many people (especially here on LW) value knowledge for its own sake. There are emotions associated with this goal, but the emotions are ancillary. There are also instrumental reasons to seek truth, but they don’t always apply. The actual goal is “improving one’s world-model” or something similar. It bottoms out there. Emotions need not apply.
The key piece though is that regardless, as tslarm says, “emotions are accompanied by (or identical with, depending on definitions) valenced qualia”. They always have some value.
First off, I’m not wholly convinced this is true. I think emotions are usually accompanied by valenced qualia, but (as with my comments about curiosity) not necessarily always. Sure, if you define “emotion” so that it excludes all possible counterexamples, then it will exclude all possible counterexamples, but also you will no longer be talking about the same concept as other people using the word “emotion”.
Second, there is an important difference between “accompanied by valenced qualia” and “has value”. There is no such thing as “inherent value”, absent a thinking being to do the evaluation. Again, things like values and goals are properties of agents; they reflect the directions in which those agents steer.
Finally, more broadly, there’s a serious problem with terminally valuing only the feeling of emotions. Imagine a future scenario: all feeling beings are wired to an enormous switchboard, which is in turn connected to their emotional processors. The switchboard causes them to feel an optimal mixture of emotions at all times (whatever you happen to think that means) and they experience nothing else. Is this a future you would endorse? Does something important seem to be missing?
Does this also mean there is no such thing as “inherent good”? If so, then one cannot say “X is good”; they would have to say “I think that X is good”, for “good” would be a fact of their mind, not the environment.
This is what I thought the whole field of morality was about: defining what is “good” in an objective, fundamental sense.
And if “inherent good” can exist but not “inherent value”, how would “good” be defined, since it wouldn’t be allowed to use “value” in its definition?
Does this also mean there is no such thing as “inherent good”?
Yes.
If so, then one cannot say “X is good”; they would have to say “I think that X is good”, for “good” would be a fact of their mind, not the environment.
One can say all sorts of things. People use the phrase “X is good” to mean lots of things: “I’m cheering for X”, “I value X”, “X has consequences most people endorse”, etc. I don’t recommend we abandon the phrase, for many phrases are similarly ambiguous but still useful. I recommend keeping this ambiguity in mind, however, and disambiguating where necessary.
This is what I thought the whole field of morality was about: defining what is “good” in an objective, fundamental sense.
I would no more describe morality as solely attempting to define objective good than I would describe physics as solely attempting to build a perpetual motion machine. Morality is also about the implications and consequences of specific values and to what extent they converge, and a great many other things. The search for “objective” good has, IMO, been a tragic distraction, but one that still occasionally bears interesting fruit by accident.