The part about computation doesn’t change the fundamental structure of the theory. It’s true that it creates more room for superficial disagreement and fallibility (of similar status to disagreements and fallibility regarding the effective means to some shared terminal values), but I see this as an improvement in degree and not in kind. It still doesn’t allow for fundamental disagreement and fallibility, e.g. amongst logically omniscient agents.
(I take it to be a metaethical datum that even people with different terminal values, or different Eliezerian “computations”, can share the concept of a normative reason, and sincerely disagree about which (if either) of their values/computations is correctly tracking the normative reasons. Similarly, we can coherently doubt whether even our coherently-extrapolated volitions would be on the right track or not.)
It still doesn’t allow for fundamental disagreement and fallibility, e.g. amongst logically omniscient agents.
It’s not clear to me why there must be fundamental disagreement and fallibility, e.g. amongst logically omniscient agents. Can you refer me to an argument or intuition pump that explains why you think that?
One related argument is the Open Question Argument: for any natural property F that an action might have, be it “promotes my terminal values” or “is the output of an Eliezerian computation that models my coherent extrapolated volition” or whatever the details might be, it’s always coherent to ask: “I agree that this action is F, but is it good?”
But the intuitions that any metaethics worthy of the name must allow for fundamental disagreement and fallibility are perhaps more basic than this. I’d say they’re just the criteria that we (at least, many of us) have in mind when insisting that any morality worthy of the name must be “objective”, in a certain sense. These two criteria are proposed as capturing precisely that sense of objectivity. (Again, don’t you find something bizarrely subjectivist about the idea that we’re fundamentally morally infallible—that we can’t even question whether our fundamental values / CEV are really on the right track?)
I’d say they’re just the criteria that we (at least, many of us) have in mind when insisting that any morality worthy of the name must be “objective”, in a certain sense.
What would you say to someone who does not share your intuition that such “objective” morality likely exists?
My main problem with objective morality is that while it’s hard to deny that there seem to be mind-independent moral facts like “pain is morally bad”, there don’t seem to be enough such facts to build an ethical system out of them. What natural phenomena count as pain, exactly? How do we trade off between pain and pleasure? How do we trade off between pain in one person and annoyance in many others? How do we trade off pain across time (e.g., should we discount future pain, and if so, how)? Across possible worlds? How do we morally treat identical copies? It seems really hard, perhaps impossible, to answer these questions without using subjective preferences or intuitions that vary from person to person, or worse, just picking arbitrary answers when we don’t even have any relevant preferences or intuitions. If it turns out that such subjectivity and/or arbitrariness can’t be avoided, that would be hard to square with objective morality actually existing.
(Again, don’t you find something bizarrely subjectivist about the idea that we’re fundamentally morally infallible—that we can’t even question whether our fundamental values / CEV are really on the right track?)
I do think there’s something wrong with saying that we can’t question whether CEV is really on the right track. But I wouldn’t use the words “bizarrely subjectivist”. To me the problem is just that I clearly can and do question whether CEV is really on the right track. Fixing this seems to require retreating quite a bit from Eliezer’s metaethical position (but perhaps there is some other solution that I’m not thinking of). At this point I would personally take the following (minimalist) position:
1. At least some people, at least some of the time, refer by “morality” to the same concept I do, and they have substantive disagreements over its nature and content.
2. I’m not confident about any of its properties.
3. Running CEV (if it were practical to do so) seems like a good way to learn more about the nature and content of morality, but there may be (probably are) better ways.
If it turns out that such subjectivity and/or arbitrariness can’t be avoided, that would be hard to square with objective morality actually existing.
Compare with formal systems giving first-order theories of the standard model of the natural numbers. You can’t specify the whole thing: at some point you run into statements, independent of everything that came before, for which it’s hard to decide whether they hold of the standard naturals, and so you could consistently extend the theory with either those statements or their negations. Does this break the intuition that there is some intended structure corresponding to the natural numbers, or, more pragmatically, that we can still usefully seek better theories that capture it? For me, it doesn’t in any obvious way.
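For a concrete instance of the kind of independence being gestured at (my illustration, using standard results rather than anything from the exchange itself): assuming first-order Peano arithmetic (PA) is sound, Gödel’s second incompleteness theorem gives

\[
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}) \qquad \text{and} \qquad \mathrm{PA} \nvdash \neg\,\mathrm{Con}(\mathrm{PA}),
\]

so both \(\mathrm{PA} + \mathrm{Con}(\mathrm{PA})\) and \(\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})\) are consistent theories, yet only the former is true of the standard naturals. Adding the “wrong” extension yields a theory that is perfectly consistent but no longer tracks the intended structure.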
It seems to be an argument in favor of arithmetic being objective that almost everyone agrees that a certain set of axioms correctly characterizes what the natural numbers are (even if incompletely), and that from that set of axioms we can derive much (even if not all) of what we want to know about their properties. If arithmetic were in the same situation as morality is today, it would be much harder (i.e., more counterintuitive) to claim that (1) everyone is referring to the same thing by “arithmetic” and “natural numbers” and (2) arithmetic truths are mind-independent.
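For concreteness (this is the standard textbook axiomatization, not something proposed in the exchange itself), the near-universally accepted characterization is first-order Peano arithmetic: the axioms

\[
\forall x\, \big(S(x) \neq 0\big), \qquad \forall x\,\forall y\, \big(S(x) = S(y) \rightarrow x = y\big),
\]

the recursive defining equations for \(+\) and \(\times\), and the induction schema

\[
\big(\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\big) \rightarrow \forall x\,\varphi(x)
\]

for each formula \(\varphi\). Morality today has no comparably compact, comparably uncontroversial axiom set.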
To put it another way, conditional on objective morality existing, you’d expect the situation to be closer to that of arithmetic. Conditional on it not existing, you’d expect the situation to be closer to what it actually is.
What would you say to someone who does not share your intuition that such “objective” morality likely exists?
I’d say: be an error theorist! If you don’t think objective morality exists, then you don’t think that morality exists. That’s a perfectly respectable position. You can still agree with me about what it would take for morality to really exist. You just don’t think that our world actually has what it takes.
Yes, that makes sense, except that my intuition that objective morality does not exist is not particularly strong either. I guess what I was really asking was, do you have any arguments to the effect that objective morality exists?