So are you arguing that the metaethics sequence is wrong and that moral arguments are meaningless?
Something like that, except I don’t know enough to claim that moral arguments are meaningless, just that the problem is unsolved. Eliezer seems to think that moral arguments are meaningful, but their meanings are derived only from how humans happen to respond to them (or more specifically to whatever coherence humans may show in their responses to moral arguments). I’ve written before about why I find this unsatisfactory. Perhaps more relevant for the current discussion, I don’t see much reason to think this kind of “meaning” is sufficient to allow us to fully reduce concepts like “pain” and “self”.
Eliezer seems to think that moral arguments are meaningful, but their meanings are derived only from how humans happen to respond to them (or more specifically to whatever coherence humans may show in their responses to moral arguments).
No. What he actually says is that when we do moral reasoning we are approximating some computation, in the same way that the pebblesorters are approximating primality. What makes moral arguments valid or invalid is whether they actually establish what they were trying to establish in the context of the actual concept of rightness being approximated, just as a pebblesorter’s argument that 6 is an incorrect heap because 6 = 2*3 is judged valid because it establishes that 6 is not a prime number. Of course, to determine what computation we actually are trying to approximate, or to establish that we actually are approximating something, looking at the coherence humans show in their responses to moral arguments is an excellent idea, but it is not how you define morality.
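To make the analogy concrete, here is a minimal sketch in Python (the `is_prime` helper and the specific checks are my own illustration, not anything from the sequence) of the computation the pebblesorters would be approximating, and of why the “6 = 2*3” argument counts as valid within it:

```python
def is_prime(n):
    """The computation the pebblesorters are, on this reading, approximating."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The pebblesorter argument "6 is an incorrect heap because 6 = 2 * 3"
# is judged valid because exhibiting a factorization establishes non-primality.
assert 6 == 2 * 3 and not is_prime(6)
assert is_prime(7)  # whereas a heap of 7 pebbles is "correct"
```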
Looking at the first link you provided, I think that looking at where people’s moralities concentrate as you present moral arguments to them is totally the wrong way to look at this problem. Consider, for example, Goldbach’s Conjecture. If you were to take people and present random arguments about the conjecture to them, it doesn’t necessarily seem to be the case, depending on what distribution you use over possible arguments, that their opinions will concentrate on the correct conclusion concerning the validity of the conjecture. That doesn’t mean that people can’t talk meaningfully about the validity of Goldbach’s Conjecture. Should we be able to derive the computation that we approximate when we talk about morality by examining the dynamics of how people react to moral arguments? The answer is yes, but it isn’t a trivial problem.
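(A toy sketch, just to make the Goldbach example concrete: the conjecture names a definite computation whose verdict on any particular even number doesn’t depend on which arguments anyone happens to hear. The helper names below are my own.)

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def satisfies_goldbach(n):
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Whatever distribution of arguments people are exposed to, this is the
# underlying question; here we just confirm it for small even numbers.
assert all(satisfies_goldbach(n) for n in range(4, 1000, 2))
```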
As for the second link you provided, you argue that the way we react to moral arguments could be totally random or could depend on trivial details. That doesn’t appear to be the case, and it seems to contradict the fact that we do manage to agree about a lot of things concerning morality and that moral progress does seem to be a thing.
Looking at the first link you provided, I think that looking at where people’s moralities concentrate as you present moral arguments to them is totally the wrong way to look at this problem.
You seem to be taking a position that’s different from Eliezer’s, since AFAIK he has consistently defended this approach that you call “wrong” (for example in the thread following my first linked comment). If you have some idea of how to “derive the computation that we approximate when we talk about morality by examining the dynamics of how people react to moral arguments” that doesn’t involve just “looking at where people’s moralities concentrate as you present moral arguments to them”, then I’d be interested to know what it is.
ETA: Assuming “when we do moral reasoning we are approximating some computation”, what reasons do we have for thinking that the “some computation” will allow us to fully reduce “pain” to a set of truth conditions? What are some properties of this computation that you can cite as being known, that lead you to think this?
You seem to be taking a position that’s different from Eliezer’s, since AFAIK he has consistently defended this approach that you call “wrong” (for example in the thread following my first linked comment).
Well, Eliezer_2009 does seem to underestimate the difficulty of the extrapolation problem.
If you have some idea of how to “derive the computation that we approximate when we talk about morality by examining the dynamics of how people react to moral arguments” that doesn’t involve just “looking at where people’s moralities concentrate as you present moral arguments to them”, then I’d be interested to know what it is.
Have I solved that problem? No. But humans do seem to be able to infer, from the way pebblesorters reason, that they are referring to primality. Likewise, it seems that by looking at how mathematicians reason about Goldbach’s Conjecture we should be able to infer what they refer to when they speak about it, and we don’t do that by looking at what position they end up concentrating on when we present them with all possible arguments. So the problem ought to be solvable.
Assuming “when we do moral reasoning we are approximating some computation”, what reasons do we have for thinking that the “some computation” will allow us to fully reduce “pain” to a set of truth conditions? What are some properties of this computation that you can cite as being known, that lead you to think this?
Are you trying to imply that the computation we approximate might only be defined over a fixed ontology which doesn’t correspond to the correct ontology, and would simply return a domain error when one tries to apply it to the real world? Well, that doesn’t seem to be the case, because we are able to do moral reasoning at a more fine-grained level than the naive ontology, and a lot of the concepts that seem relevant to moral reasoning, qualia for example, seem like they ought to have canonical reductions. I detail the way in which I see us doing that kind of elucidation in this comment.
But humans do seem to be able to infer, from the way pebblesorters reason, that they are referring to primality. Likewise, it seems that by looking at how mathematicians reason about Goldbach’s Conjecture we should be able to infer what they refer to when they speak about it, and we don’t do that by looking at what position they end up concentrating on when we present them with all possible arguments. So the problem ought to be solvable.
In each of those cases the pebblesorter reasoning / human mathematical reasoning is approximating some idealized reasoning system that is “well-behaved” in certain ways, for example not being sensitive to the order in which it encounters arguments. It’s also the case that there is only one such system “nearby” so that we don’t have to choose from multiple reasoning systems which one we are really approximating. (Although even in math there are some areas, such as set theory, with substantial unresolved debate as to what it is that we’re really talking about.) It’s unclear to me that either of these holds for human moral reasoning.
Well, that doesn’t seem to be the case, because we are able to do moral reasoning at a more fine-grained level than the naive ontology
Can you give a detailed example of this?
Given that we are able to come to agreement about certain moral matters, and given the existence of moral progress, I do think that the evidence favors the existence of a well-behaved idealized reasoning system that we are approximating when we do moral reasoning.
Given that we are able to come to agreement about certain moral matters, and given the existence of moral progress, I do think that the evidence favors the existence of a well-behaved idealized reasoning system that we are approximating when we do moral reasoning.
What “certain moral matters” do you have in mind? As for the existence of moral progress, see Konkvistador’s draft post Against moral progress.
This for a start.
I’ve always found that post problematic, and finally wrote down why. Any other examples?