Right, so my point is that if your theory (that moral reasoning is probabilistic reasoning about some mathematical object) is to be correct, we need a definition of morality as a mathematical object which isn’t “what X says after considering all possible moral arguments”. So what could it be then? What definition Y can we give, such that it makes sense to say “when we reason about morality, we are really doing probabilistic reasoning about the mathematical object Y”?
Secondly, until we have a candidate definition Y at hand, we can’t show that moral reasoning really does correspond to probabilistic logical reasoning about Y. (And we’d also have to first understand what “probabilistic logical reasoning” is.) So, at this point, how can we be confident that moral reasoning does correspond to probabilistic logical reasoning about anything mathematical, and isn’t just some sort of random walk or some sort of reasoning that’s different from probabilistic logical reasoning?
Right, so my point is that if your theory (that moral reasoning is probabilistic reasoning about some mathematical object) is to be correct, we need a definition of morality as a mathematical object which isn’t “what X says after considering all possible moral arguments”. So what could it be then? What definition Y can we give, such that it makes sense to say “when we reason about morality, we are really doing probabilistic reasoning about the mathematical object Y”?
Unfortunately I doubt I can give you a short direct definition of morality. However, if such a mathematical object exists, “what X says after considering all possible moral arguments” should be enough to pin it down (disregarding the caveats to do with our subject going insane, etc.).
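One way to make “pin it down” a little more concrete (a sketch of my own, not anything established above: the enumeration, the judgment function J_X and the limit are all assumptions):

```latex
% Sketch: a_1, a_2, \ldots enumerates all moral arguments, and
% J_X(p \mid a_1, \ldots, a_n) is X's judgment on proposition p after
% considering the first n of them.
\[
  M(p) \;=\; \lim_{n \to \infty} J_X\bigl(p \mid a_1, \ldots, a_n\bigr)
\]
% assuming the limit exists and is independent of the enumeration chosen.
```

The caveats are then exactly the worry that this limit might not exist, or might depend on the order in which the arguments are considered.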
Secondly, until we have a candidate definition Y at hand, we can’t show that moral reasoning really does correspond to probabilistic logical reasoning about Y. (And we’d also have to first understand what “probabilistic logical reasoning” is.) So, at this point, how can we be confident that moral reasoning does correspond to probabilistic logical reasoning about anything mathematical, and isn’t just some sort of random walk or some sort of reasoning that’s different from probabilistic logical reasoning?
Well, I think it’s safe to assume I mean something by moral talk; otherwise I wouldn’t care so much about whether things are right or wrong. I must be talking about something, because that something is wired into my decision system. And I presume this something is mathematical, because (assuming I mean something by “P is good”) you can take the set of all good things, and this set is the same in all counterfactuals. Roughly speaking.
It is, of course, possible that moral reasoning isn’t actually any kind of valid reasoning, but does amount to a “random walk” of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you’re not even talking about the same thing you were before hearing it. Which is worrying.
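As a toy illustration of the difference between the two pictures (everything below, the fixed value, the noise distributions and the step sizes, is invented for the sketch and is not a model of anyone’s actual reasoning): in the first process each argument is noisy evidence about a fixed quantity and the estimates settle down; in the second each argument permanently shifts the intuition itself and the position just drifts.

```python
# Toy contrast: (A) noisy estimation of a fixed target vs.
# (B) a random walk with no target. All constants are invented.
import random

random.seed(0)
TRUE_VALUE = 0.7      # stands in for the fixed "mathematical object"
STEPS = 10_000

# (A) each argument is noisy evidence about TRUE_VALUE; the average converges.
total = 0.0
for _ in range(STEPS):
    total += TRUE_VALUE + random.gauss(0, 1.0)
estimate_a = total / STEPS

# (B) each argument permanently perturbs the intuition; there is no target.
position = 0.7
for _ in range(STEPS):
    position += random.gauss(0, 0.05)

print(f"(A) estimate of the fixed object after {STEPS} arguments: {estimate_a:.3f}")
print(f"(B) drifting intuition after {STEPS} arguments:           {position:.3f}")
```

In (A) the endpoint is (approximately) independent of which particular arguments were heard; in (B) it depends entirely on the path taken, which is the worrying case.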
Also, it’s possible that moral talk in particular is mostly signalling intended to disguise our true values, which are very similar but more selfish. But that doesn’t make a lot of difference, since you can still cash out your values as a mathematical object of some sort.
It is, of course, possible that moral reasoning isn’t actually any kind of valid reasoning, but does amount to a “random walk” of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you’re not even talking about the same thing you were before hearing it. Which is worrying.
Yes, exactly. This seems to me pretty likely to be the case for humans. Even if it isn’t, nobody has done the work to rule it out yet (has anyone even written a post making any kind of argument against it?), so how do we know? Doesn’t it seem to you that we might be doing some motivated cognition in order to jump to a comforting conclusion?
“what X says after considering all possible moral arguments”
I know you’re not arguing for this, but I can’t help noting the discrepancy between the simplicity of the phrase “all possible moral arguments” and what it would mean, if it can be defined at all.
But then many things are “easier said than done”.