It is, of course, possible that moral reasoning isn’t actually any kind of valid reasoning, but does amount to a “random walk” of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you’re not even talking about the same thing you were before hearing it. Which is worrying.
Yes, exactly. This seems to me pretty likely to be the case for humans. Even if it isn’t, nobody has done the work to rule it out yet (has anyone even written a post arguing against it?), so how do we know? Doesn’t it seem to you that we might be doing some motivated cognition in order to jump to a comforting conclusion?