Human beings might be thought of as superrational agents who make mistakes (moral errors). I don't know of a good technical model for this, but I suspect one recommendation that would come out of it is not to punish people disproportionately: since superrational agents all end up choosing the same policy, whatever punishment severity you endorse is the severity others would apply to you when you make a mistake.
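As a gesture toward the missing technical model, here is a toy sketch (entirely my own construction, with made-up functional forms and parameters): every agent adopts the same punishment severity, each agent commits a moral error with probability `p`, and harsher punishment buys some deterrence with diminishing returns. Under these assumptions, the symmetric policy that maximizes expected utility is a moderate one, because unbounded severity mostly increases the cost you bear when you yourself err.

```python
# Hypothetical toy model: superrational agents all adopt the same punishment
# severity. Each agent slips up (a "moral error") with probability p, in
# which case the shared policy is applied to them.

def expected_utility(severity, p=0.05, deterrence=0.5):
    # Assumed benefit: harsher punishment deters others' wrongdoing,
    # with diminishing returns (saturating at `deterrence`).
    deterrence_benefit = deterrence * severity / (1 + severity)
    # Assumed cost: with probability p you err and suffer the shared
    # punishment yourself.
    expected_cost = p * severity
    return deterrence_benefit - expected_cost

# The severity (over a coarse integer grid) that a superrational agent
# would endorse, knowing it applies to everyone including themselves.
best = max(range(0, 101), key=expected_utility)
```

In this sketch `best` comes out small: almost all the deterrence benefit is captured at low severities, while the expected self-punishment cost keeps growing linearly, so "disproportionate" severities are dominated. None of the numbers mean anything; the point is only that policy symmetry plus a nonzero error rate pushes against extreme punishments.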
Not exactly about the same thing, but see this.