Eliezer’s original quote was better. Wasn’t it about superintelligences?
I wasn’t quoting Eliezer; I made (and stand by) a plain English claim. It does happen to be similar in form to a recent instance of Eliezer summarily rejecting PhilGoetz’s declaration that rationalists don’t care about the future. That quote from Eliezer was about “expected-utility-maximising agents”, which would make it rather inappropriate in this context.
I will actually strengthen my declaration to:
Because agents can care about whatever the hell they want to care about. (This too should be uncontroversial.)
Anyway you are not a superintelligence or a rational agent and therefore have not yet earned the right to want to want whatever you think you want to want.
An agent does not determine its preferences by mere vocalisation, nor do its beliefs about its preferences intrinsically make them so. Nevertheless, I do care about my utility function (with the vaguely specified caveats). If you could suggest a formalisation sufficiently useful for decision making that I could care about even more than my utility function, then I would do so. But you cannot.
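For concreteness, this is roughly what I mean by a formalisation that is useful for decision making (the notation below is my own illustrative sketch, not a quote from Eliezer or anyone else): a utility function over outcomes plus the standard expected-utility rule for choosing actions.

```latex
% Illustrative sketch of the standard expected-utility decision rule
% (my notation; A = available actions, O = possible outcomes,
%  u = utility function, P(o|a) = probability of outcome o given action a):
a^{*} \;=\; \arg\max_{a \in A} \sum_{o \in O} P(o \mid a)\, u(o)
```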
Then again, I don’t have the right to deny rights, so whatever.
No, you don’t. The only way you could apply limits on what I want is by physically altering my molecular makeup. As well as being rather difficult for you to do on any significant scale, that would let me credibly claim that the new physical configuration you constructed from my atoms is something other than ‘me’. You can’t get a much more fundamental destruction of identity than changing what an agent wants.
I don’t object to your declaring that you don’t have, or don’t want to have, a utility function. That’s your problem, not mine. But I will certainly object to any intervention that denies that others may have one.