I would be genuinely interested in hearing an explanation of this.
I’ll give you a cliff notes version that strips out all sense of sophistication. And probably a lot of the accuracy too.
Take my personal preferences. Inclusive preferences—so what I would want after you take into account all my ethics and allow for me caring about other people’s preferences, etc.
Imagine those preferences are represented in Excel. There are references to all kinds of things, including other people’s Excel spreadsheets and other cells representing my immediate desires.
Now copy all the values and do a ‘right click, paste by value’. So now there is a list of all the values of what is right, but no longer necessarily any reference to actual preferences (leaving aside for now the possibility of objective reference to preferences, which is possible but not necessary).
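If it helps, here is a rough Python sketch of the reference-versus-value distinction I’m gesturing at (the names and numbers are made up purely for illustration):

```python
import copy

# Live preference "spreadsheet": some entries are references to other
# people's preferences, so they track whatever those currently are.
alices_prefs = {"pie": 0.9}
my_prefs = {
    "own_comfort": 0.7,
    "alice_gets_pie": alices_prefs,  # reference: follows Alice's current preference
}

# The 'paste by value' step: freeze a snapshot of the current values,
# severing every reference. Call this frozen copy 'should'.
should = copy.deepcopy(my_prefs)

# Later edits to me or to Alice change the live spreadsheet...
alices_prefs["pie"] = 0.1
my_prefs["own_comfort"] = 0.2

# ...but the frozen snapshot doesn't budge: it no longer refers to
# anyone's current preferences, it just records what they were.
print(my_prefs)  # {'own_comfort': 0.2, 'alice_gets_pie': {'pie': 0.1}}
print(should)    # {'own_comfort': 0.7, 'alice_gets_pie': {'pie': 0.9}}
```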
Call the above ‘should’ or ‘right’. It is an objective set of values and whatnot that is a feature of the universe and built into the very meaning of the word ‘should’.
If you go and edit me in the future, the value of ‘should’ doesn’t change.
If other people have different values, that doesn’t change what is ‘right’, except in so much as ‘should’ already took that into account via altruism.
If I edit myself (murder pill), then even that doesn’t change what is right. It just means that I end up having wrong preferences because I made an error in judgement.
If someone goes and builds a time machine and changes the me of the present such that I would create a different ‘paste by value’ spreadsheet, then the original version is still the value of ‘should’. That is, it isn’t a reference to my values. It is a set of values that just so happens to match my overall preferences.
I can be wrong about ‘right’. It isn’t what I say or think. It is what I would think if I were superintelligent and overwhelmingly well informed about myself and the salient features of the universe.
If I had never existed, the values in this spreadsheet would still be right. Nobody would know about them, but that changes nothing! :P
This may sound complicated, but it does match one of the senses in which we use ‘should’ or ‘right’ in common practice. It could be described as ‘subjectively objective’. For the purpose of dealing with other people with different preferences, it is not that much different from moral relativism. Even though ‘should’ has a single objective meaning (mine! :P), it is still the kind of thing that is best to completely cut out from conversation for the purpose of negotiation.
I see. The origin of these values, which I will assume you could get precise enough to use as metrics for judging possible physical futures, is still effectively the utility function in your brain, no?
You could take a snapshot of your preference/value network at any time and define what is right accordingly, but I’m not clear on how it becomes a “feature of the universe”. It is objectively true that different futures will have different scores according to that particular set of values and preferences, but paying any attention to that set is contingent on your existence and arrival at the state where the snapshot is taken.
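To make the scoring point concrete, here is a toy sketch of what I mean: a frozen snapshot of value weights assigning different scores to different candidate futures (the features and weights are invented for the example):

```python
# Frozen snapshot of value weights (the 'paste by value' copy).
should = {"own_comfort": 0.7, "alice_gets_pie": 0.9}

# Two candidate futures, described by how much of each valued feature they realise.
future_a = {"own_comfort": 1.0, "alice_gets_pie": 0.0}
future_b = {"own_comfort": 0.3, "alice_gets_pie": 1.0}

def score(future, weights):
    """Weighted sum of a future's features under a fixed value snapshot."""
    return sum(w * future.get(feature, 0.0) for feature, w in weights.items())

print(score(future_a, should))  # 0.70
print(score(future_b, should))  # 0.3*0.7 + 1.0*0.9 = 1.11
```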
It’s odd. From what you’ve described I don’t think we disagree at all on the substance of the situation, but are just using some words differently. I think this line may hold the key:
For the purpose of dealing with other people with different preferences it is not that much different from moral relativism.
:)
That is what I was trying to convey. :)