There is no such thing as “inherent value”
Does this also mean there is no such thing as “inherent good”? If so, then one cannot say “X is good”; one would have to say “I think that X is good”, since “good” would be a fact about one's own mind, not about the environment.
This is what I thought the whole field of morality was about: defining what is “good” in an objective, fundamental sense.
And if “inherent good” can exist but “inherent value” cannot, how would “good” be defined? It wouldn’t be allowed to use “value” in its definition.
Wow, thank you so much. This is a lens I totally hadn’t considered.
You can see in the post how confused I was about how evolution played a part in “imbuing” material terminal goals into humans. I was like, “but kinetic sculptures were not in the ancestral environment?”
It sounds like, rather than imbuing humans with material goals directly, evolution has imbued a process by which humans create their own.
I would still define material goals simply as terminal goals that are not defined by some qualia, but it is fascinating that this is what material goals look like in humans.
This also, as you say, makes it harder to distinguish between emotional and material goals in humans, since our material goals are ultimately emotionally derived. In particular, it makes it difficult to distinguish between an instrumental goal serving an emotional terminal goal, and a learned material goal created from the reinforced prediction of its expected emotional reward.
E.g. the difference between someone wanting a cookie because it will make them feel good, and someone wanting money as a terminal goal because their brain frequently predicted that money would lead to feeling good.
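To make that distinction concrete, here is a minimal toy sketch, entirely my own construction (the reward function, transition table, and learning rule are all hypothetical simplifications): the cookie is only valued by looking ahead to the feeling it produces, while “money” gradually acquires a cached value of its own from repeatedly predicting that reward.

```python
# Hypothetical primitive reward: only "feeling good" carries emotional reward.
def emotional_reward(state: str) -> float:
    return 1.0 if state == "feel_good" else 0.0

# Toy world model: which state each action is predicted to lead to.
TRANSITIONS = {
    "eat_cookie": "feel_good",
    "get_money": "can_buy_things",   # money is only indirectly rewarding
    "can_buy_things": "feel_good",
}

# Case 1: instrumental goal -- the cookie is valued by looking ahead to the reward.
def instrumental_value(action: str, depth: int = 3) -> float:
    state = TRANSITIONS.get(action)
    if state is None or depth == 0:
        return 0.0
    return emotional_reward(state) or instrumental_value(state, depth - 1)

# Case 2: learned material goal -- "money" accumulates a cached value of its own
# through repeated reinforced predictions, and is then pursued without look-ahead.
cached_value = {"get_money": 0.0}
alpha = 0.1  # learning rate
for _ in range(100):  # many experiences of money eventually leading to feeling good
    predicted = instrumental_value("get_money")
    cached_value["get_money"] += alpha * (predicted - cached_value["get_money"])

print(instrumental_value("eat_cookie"))   # ~1.0, recomputed from the reward each time
print(cached_value["get_money"])          # ~1.0, now a standing value in its own right
```

The point of the sketch is just that the two agents can end up behaving identically, even though one is doing look-ahead to an emotional reward and the other is pursuing a value that has become detached from it.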
I still make this distinction between material and emotional goals because this isn’t the only way that material goals play out among all agents. For example, my thermostat has simply been directly imbued with the goal of maintaining a temperature. I can also imagine this is how material goals play out in most insects.
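For contrast, here is an equally toy sketch of what I mean by a directly imbued material goal (again my own simplification, not anything from your comment): the setpoint is hard-coded into the controller, with no reward signal or learned value anywhere in the loop.

```python
# A directly imbued material goal: the target temperature is hard-coded,
# and the controller never predicts or learns any reward.
def thermostat_step(current_temp: float, setpoint: float = 20.0) -> str:
    """Return the action a simple bang-bang thermostat takes for one time step."""
    if current_temp < setpoint - 0.5:
        return "heat_on"
    if current_temp > setpoint + 0.5:
        return "heat_off"
    return "hold"

print(thermostat_step(18.0))  # heat_on
print(thermostat_step(22.0))  # heat_off
```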
This makes a lot of sense. Yeah, I was definitely simplifying all emotions to just their qualia effect, without considering the other physiological effects that define them. So I guess in this post, when I say “emotion”, I really mean “qualia”.
Just to clarify, are you using “reward” here to also mean “positive (or a lack of negative) qualia”? Or is this reinforcement mechanism recursive, where we might learn to value something because of its predicted reward, but that reward is itself a learned value… and so on, with the base case being an emotional reward? If so, how deep can it go?
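Here is a rough sketch of how I picture the recursive reading of that question, as a toy model of my own (the chain of goals and the update rule are hypothetical, not something you described): each step's value is learned from the predicted value of the step it leads to, so learned values can stack arbitrarily deep, yet they all bottom out in the one primitive reward at the end of the chain.

```python
# A chain of learned goals: degree -> job -> money -> feel_good (primitive reward).
chain = ["degree", "job", "money", "feel_good"]
values = {s: 0.0 for s in chain}
values["feel_good"] = 1.0  # the base case: a primitive emotional reward
alpha = 0.5                # learning rate

for _ in range(50):  # repeated experience lets value propagate back along the chain
    for i in range(len(chain) - 1):
        state, next_state = chain[i], chain[i + 1]
        values[state] += alpha * (values[next_state] - values[state])

print(values)  # every link in the chain ends up valued, however long the chain is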