First of all, thanks for the comment. You have really motivated me to read and think about this more.
That’s what I like to hear!
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I’m not convinced that there is objective truth in intrinsic or moral values.
But there is no need for morality in the absence of agents. When agents are there, values will be there; when agents are not there, the absence of values doesn’t matter.
I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really “bad” things if the beliefs and intrinsic values on which it is acting are “bad”. Why else would anyone be scared of AI?
I don’t require their values to converge; I require them to accept the truth of certain claims. This happens in real life. People say “I don’t like X, but I respect your right to do it”. The first part says X is a disvalue; the second is an override coming from rationality.