I’m not convinced there are things that are objectively valuable. I’m of the belief that if there are no agents to value something, then that something has effectively no value.
The second sentence doesn’t follow from the first. If rational agents converge on their values, that is objective enough. Analogy: one can accept that mathematical truth is objective (mathematicians will converge) without being a Platonist (the view that mathematical truths have an existence separate from humans).
Without objective values, it might just be a matter of testing different sets of terminal subjective values until we find the optimum (and hopefully don’t get trapped in a local maximum).
I find that hard to follow. If the test is rationally justifiable, and leads to uniform results, how is that not objective?
You seem to be using “objective” (having a truth value independent of individual humans) to mean what I would mean by “real” (having existence independent of humans).
First of all, thanks for the comment. You have really motivated me to read and think about this more—starting with getting clearer on the meanings of “objective”, “subjective”, and “intrinsic”. I apologise for any confusion caused by my incorrect use of terminology. I guess that is why Eliezer likes to taboo words. I hope you don’t mind me persisting in trying to explain my view and using those “taboo” words.
Since I was talking about meta-ethical moral relativism, I hope that it was sufficiently clear that I was referring to moral values. What I meant by “objective values” was “objectively true moral values” or “objectively true intrinsic values”.
The second sentence doesn’t follow from the first.
The second sentence was an explanation of the first: not logically derived from the first sentence, but a part of the argument. I’ll try to construct my arguments more linearly in future.
If I had to rephrase that passage I’d say:
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I’m not convinced that there is objective truth in intrinsic or moral values.
However, the lack of meaningful values in the absence of agents hints at agents themselves being valuable. If value can only have meaning in the presence of an agent, then that agent probably has, at the very least, extrinsic/instrumental value. Even a paperclip maximiser would probably consider itself to have instrumental value, right?
If rational agents converge on their values, that is objective enough.
I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really “bad” things if the beliefs and intrinsic values on which it is acting are “bad”. Why else would anyone be scared of AI?
Analogy: one can accept that mathematical truth is objective (mathematicians will converge) without being a Platonist (the view that mathematical truths have an existence separate from humans).
I accept the possibility of objective truth values. I’m not convinced that it is objectively true that the convergence of subjectively true moral values indicates objectively true moral values. As far as values go, moral values don’t seem to be as amenable to rigorous proofs as formal mathematical theorems. We could say that intrinsic values seem to be analogous to mathematical axioms.
I find that hard to follow. If the test is rationally justifiable, and leads to uniform results, how is that not objective?
I’ll have a go at clarifying that passage with the right(?) terminology:
Without the objective truth of intrinsic values, it might just be a matter of testing different sets of assumed intrinsic values until we find an “optimal” or acceptable convergent outcome.
Morality might be somewhat like an NP-hard optimisation problem. It might be objectively true that we get a certain result from a test. It’s more difficult to say that it is objectively true that we have solved a complex optimisation problem.
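For what it’s worth, here is a minimal Python sketch of that analogy: a greedy hill-climb over a toy two-dimensional “value set”, with a made-up outcome_score that has more than one peak. Everything in it (the encoding, the score function, the step rule) is a hypothetical stand-in for illustration, not a model of morality; the point is only that a greedy search starting from different assumed value sets can settle on different local maxima.

```python
# Toy illustration of the "testing sets of assumed intrinsic values" analogy.
# The value-set encoding, outcome_score, and step rule are all made up.
import random

def outcome_score(values):
    # Hypothetical "how well do things go under this value set" measure.
    # Deliberately multi-modal so the search can get stuck in a local maximum:
    # global peak near (1, -2), a lower local peak near (2.5, -2).
    x, y = values
    return -(x - 1) ** 2 - (y + 2) ** 2 + 3 * max(0.0, 2 - abs(x - 4))

def hill_climb(start, steps=2000, step_size=0.1):
    # Greedy search: only accept a nearby candidate if it scores better.
    current = start
    best = outcome_score(current)
    for _ in range(steps):
        candidate = tuple(v + random.uniform(-step_size, step_size) for v in current)
        score = outcome_score(candidate)
        if score > best:
            current, best = candidate, score
    return current, best

if __name__ == "__main__":
    random.seed(0)
    # Different starting "intrinsic value" assumptions end up at different peaks,
    # illustrating the local-maximum worry rather than a guaranteed global optimum.
    for start in [(0.0, 0.0), (5.0, 0.0)]:
        print(start, "->", hill_climb(start))
```

Running it, the first start climbs to the global peak while the second gets trapped on the lower peak, because a greedy step of 0.1 can never cross the valley between them.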
You seem to be using “objective” (having a truth value independent of individual humans) to mean what I would mean by “real” (having existence independent of humans).
Thanks for informing me that my use of the term “objective” was confused/confusing. I’ll keep trying to improve the clarity of my communication and understanding of the terminology.
First of all, thanks for the comment. You have really motivated me to read and think about this more
That’s what I like to hear!
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I’m not convinced that there is objective truth in intrinsic or moral values.
But there is no need for morality in the absence of agents. When agents are there, values will be there; when agents are not there, the absence of values doesn’t matter.
I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really “bad” things if the beliefs and intrinsic values on which it is acting are “bad”. Why else would anyone be scared of AI?
I don’t require their values to converge; I require them to accept the truth of certain claims. This happens in real life. People say, “I don’t like X, but I respect your right to do it”. The first part says X is a disvalue; the second is an override coming from rationality.