Are you saying that one can be in two moral states in the same moment?
Sort of. I’m saying that one’s moral state, as we typically think about it, can be modeled (in a simplified way) as a sum of the good and bad things you’ve done.
So let’s say you do a bad thing, pushing you from morally neutral at 0 down to −6. But you correctly feel guilty about it, which counts as +2 good. Now your moral state is at −4, which is better but still not up in the positive numbers that we’d call “overall morally good”.
This is also why there isn’t any flip-flopping back and forth: guilt in this model is about a particular historical action, not your current overall moral state.
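To make the arithmetic explicit, here’s a minimal sketch of that ledger-style sum. The numbers are only the illustrative ones from the example above, not any real moral calculus:

```python
# A toy "moral ledger": the overall state is just the sum of scored events.
# Scores are purely illustrative, taken from the example above.

def moral_state(events):
    """Overall moral state as the running sum of good and bad things done."""
    return sum(events)

events = [
    -6,  # did a bad thing
    +2,  # correctly feels guilty about that particular action
]

print(moral_state(events))  # -4: better than -6, but still below "overall good"
```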
I can also approach this from the other side and show that a simple good/bad binary doesn’t work as a model of our sense of moral evaluation. If you imagine the various combinations in the table I made earlier, it feels right (at least to me) to be able to sort them from most to least moral like so:
1. Didn’t do anything bad, doesn’t feel guilty
2. Didn’t do anything bad, feels guilty
3. Did something bad, feels guilty
4. Did something bad, doesn’t feel guilty
If that (or any other single ordering) also feels about right to you, then it follows that you’d need more than just one bit of input (i.e. whether or not a person feels guilty) to morally evaluate a person. With only one bit of information you can only sort people into two groups, so you couldn’t produce a four-way ordering like the one above. Therefore your earlier statement that “believing you’re bad makes you good and vice versa” can’t be correct, because it takes only that one bit as input.
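Here’s a small sketch of that counting point. The scores are made up, and it assumes (purely to match the ordering above) that misplaced guilt counts slightly against you; the point is just that a rule looking only at the one bit “feels guilty?” can produce at most two distinct verdicts, while the two-input sum model can produce four and so can express the ordering:

```python
# Counting argument: one bit of input can split people into at most two groups,
# so it cannot reproduce a strict four-way ordering. All scores are illustrative.

from itertools import product

def two_input_score(did_bad: bool, feels_guilty: bool) -> int:
    """Toy sum model: uses both bits of information."""
    score = -6 if did_bad else 0                # the bad act itself
    if feels_guilty:
        score += 2 if did_bad else -1           # correct guilt helps; misplaced guilt hurts a bit (assumption)
    return score

def one_input_score(feels_guilty: bool) -> int:
    """'Believing you're bad makes you good': only one bit of input."""
    return 1 if feels_guilty else -1

cases = list(product([False, True], repeat=2))  # (did_bad, feels_guilty)

# Two-input model: four distinct values, so it can express the four-way ordering.
print(sorted({two_input_score(b, g) for b, g in cases}))  # [-6, -4, -1, 0]

# One-input model: the four cases collapse into just two values.
print(sorted({one_input_score(g) for _, g in cases}))     # [-1, 1]
```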