But humans do seem to be able to infer, from the way pebblesorters reason, that they are referring to primality. Likewise, it seems that by looking at mathematicians reasoning about Goldbach's Conjecture, we should be able to infer what they refer to when they speak about it; and we don't do that by looking at what position they end up concentrating on when we present them with all possible arguments. So the problem ought to be solvable.
In each of those cases, the pebblesorter reasoning / human mathematical reasoning is approximating some idealized reasoning system that is "well-behaved" in certain ways, for example not being sensitive to the order in which it encounters arguments. It's also the case that there is only one such system "nearby", so we don't have to choose, among multiple reasoning systems, which one we are really approximating. (Although even in math there are some areas, such as set theory, with substantial unresolved debate as to what it is that we're really talking about.) It's unclear to me that either of these holds for human moral reasoning.
Well, that doesn't seem to be the case, because we are able to do moral reasoning at a more fine-grained level than the naive ontology.
Given that we are able to come to agreement about certain moral matters, and given the existence of moral progress, I do think the evidence favors the existence of a well-behaved idealized reasoning system that we are approximating when we do moral reasoning.
What "certain moral matters" do you have in mind? As for the existence of moral progress, see Konkvistador's draft post Against moral progress.
This for a start.
I’ve always found that post problematic, and finally wrote down why. Any other examples?
Can you give a detailed example of this?