Right, I phrased that very badly. What I was trying to say is that the moral intuition was trained (evolutionarily or whatever) to map from behaviors to right/wrongness based on the (weighted) set of possible outcomes. So when we’re given a behavior, the intuition spits out a right/wrong decision based on what was likely to have happened, not considering what was stipulated in the problem to have actually happened.
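In toy-model terms, here's roughly the mapping I have in mind. (A minimal sketch; the footbridge framing and all the numbers are made up for illustration, nothing here comes from Greene's actual data.)

```python
# Hypothetical outcome distribution for "push the man off the footbridge":
# (probability, badness) pairs as the intuition might have learned them
# from typical cases, NOT from the dilemma's stipulations.
typical_outcomes = [
    (0.90, 1.0),   # he dies and the trolley isn't stopped -> very bad
    (0.09, 0.8),   # he's badly injured for nothing -> bad
    (0.01, 0.0),   # trolley stopped, five saved -> good
]

def intuitive_verdict(outcomes, threshold=0.5):
    """Score a behavior by its expected badness over *typical* outcomes,
    ignoring whatever outcome the problem stipulates actually happened."""
    expected_badness = sum(p * badness for p, badness in outcomes)
    return "wrong" if expected_badness > threshold else "permissible"

# The dilemma stipulates the push certainly saves five, but the verdict
# is computed from the typical distribution, so it still comes out "wrong".
print(intuitive_verdict(typical_outcomes))  # -> wrong
```

So the stipulated certainty never enters the calculation; the verdict is driven entirely by what pushing someone usually leads to.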
See, and I was going to write that your second paragraph was more insightful and didn’t really follow from your first paragraph. I was going to say that it seemed like that was what moral intuition was actually calculating inaccessibly, so it is indeed interesting that (as far as Greene reports) nobody does come out with it as a rationalization. But then I held off, because I thought that I was just projecting my own thought process onto your words, and you might have meant something more in line with your first paragraph by them.