So, you’ve described a human preference for having the things we’d want to happen to us also happen to systems we recognize as sufficiently like ourselves. Call that preference P.
A preference utilitarian would say that the moral value of a choice is proportional to the degree to which P is satisfied by that choice. (All else being equal.)
If I’ve understood you correctly, you reject preference utilitarianism as a moral framework. Instead, you suggest a deontological framework based on “agency.” And agency is a concept you came up with to encapsulate whatever properties humans have that “justify” preferring P.
Have I followed you so far?
OK. Can you say more about how a preference is justified?
For example, you conclude that humans are justified in preferring P on the basis of various attributes of humans (the ability to take action based on expected consequences, the ability to make “meta-level” choices, “a set of actions,” and maybe something about learning). I infer you believe we’re _un_justified in preferring P on the basis of other attributes (say, skin color, or height above sea level, or tendency to slaughter other humans).
Is that right?
How did you arrive at those particular attributes?
> So, you’ve described a human preference for having the things we’d want to happen to us also happen to systems we recognize as sufficiently like ourselves. Call that preference P.
>
> A preference utilitarian would say that the moral value of a choice is proportional to the degree to which P is satisfied by that choice. (All else being equal.)
I was starting from my own intuitions about my moral preferences. But if you stop at treating morality as a preference, you run into problems when people don’t share those preferences. A common variation, for example, is the belief that it is good for the strong to prey on the weak. Because morality is an interpersonal thing, any morality must account for differences in preferences and therefore cannot itself be based in preferences. That’s why I reject preference utilitarianism.
> And agency is a concept you came up with to encapsulate whatever properties humans have that “justify” preferring P.
My agency-based morality does justify my moral preferences, but it doesn’t “just” justify them. I only have my moral preferences as a starting point. From them I construct an abstract moral framework, check whether that abstract framework satisfies conditions of consistency and plausibility, and once I’m convinced it does, use it to justify or adjust my moral preferences.
Other people might come to different conclusions using this process, but since our moral frameworks are now removed from mere preferences, we can use properties of the frameworks in question to try to integrate them or decide between them. A preference utilitarian would have to resort to some unjustified selection method, like majority vote.
So how do I come up with the properties that make one moral framework better than another? I don’t know yet. I would suggest that minimalism is a good property: with a non-minimal framework, people could always ask, “Why should we adopt this particular policy?”, whereas with a minimal framework it’s either adopt all of it or don’t adopt it at all. I also justify agency as the primary motivation, since our agency is what creates the problem in the first place. Without choice we would have no use for morality; without deliberation we couldn’t follow it; without meta-level reasoning we couldn’t adopt it; and so on. In short, agency is the very thing that creates a solvable problem of morality, and thus it is the best place to solve that problem. If we start to argue that point, we reach a point where we run into Gödelian incompleteness.
You keep tossing the word “justified” around, and I am increasingly unclear on how the work that you want that word to do is getting done.
For example: I agree with you that a preference utilitarian needs some mechanism for resolving situations where preferences conflict, but I’m not sure on what basis you conclude that such a mechanism must be unjustified, nor on what basis you conclude that your agency-based moral frameworks support a more justifiable method for integrating or deciding between different people’s conflicting framework-based-conclusions.
I find your “without X we wouldn’t have a problem and therefore X is the solution” argument unconvincing. Mostly it sounds to me like you’ve decided that your framework is cool, and now you’re looking for arguments to support it.
> but I’m not sure on what basis you conclude that such a mechanism must be unjustified
I meant that it needs to be separately justified; it isn’t justified by the principle of preference utilitarianism itself.
> I find your “without X we wouldn’t have a problem and therefore X is the solution” argument unconvincing.
It’s a basic principle of engineering to solve a problem where it occurs. I think we’ve reached the point where I’m not prepared to argue any further, and I don’t think it would be fruitful to try. Thank you for the challenge.
> Mostly it sounds to me like you’ve decided that your framework is cool, and now you’re looking for arguments to support it.
That might be the case, but I don’t think it’s likely. I’m enough of an asshole to do what I want even without moral justification, and enough of a cynic not to expect anything else from other people. I wrote my original comment merely as an additional comment in the morality debate on Less Wrong, because I believe that if Eliezer were to create his FAI tomorrow, it wouldn’t be friendly towards me. The rest was just trying to answer your questions, because I really do think they helped me think it through.