You’d need a pretty convoluted consequentialist system to promote blame (and if you were willing to go that far, you could call a deontologist someone who wants to promote states of the world in which rules are followed and bad people are punished, and therefore a consequentialist at heart). Likewise, you could imagine a preference utilitarian who wants people to be punished just because they, or a sufficient number of other people, prefer it.
I’m not sure how complicated it would have to be. You might have some standard of benevolence (how disposed you are to do things that make people happy) and hold that, other things being equal, it is better for benevolent people to be happy. True, you’d have to specify a number of parameters here, but it isn’t clear that you’d need so many that the view becomes egregiously complex. (Or, on a variant, you could grade how malevolent various past actions are and hold that outcomes are better when malevolent actions are punished to a certain extent.)
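Here is one toy way of filling in those parameters (purely my own illustrative construction, not anything from the literature): score an outcome by summing everyone’s happiness, weighted up for benevolent people and down for malevolent ones.

```python
# Toy, purely illustrative sketch of a benevolence-weighted value function.
# The scale and weighting scheme are arbitrary choices, not a proposal.

def desert_adjusted_value(people):
    """people: list of (benevolence, happiness) pairs, with benevolence in
    [-1, 1] and happiness in [0, 1]. Benevolence scales how much that
    person's happiness contributes to the value of the outcome."""
    return sum((1 + benevolence) * happiness
               for benevolence, happiness in people)

# A world where the benevolent person is the happy one beats a world
# where the malevolent person is equally happy instead.
print(desert_adjusted_value([(0.5, 1.0), (-0.5, 0.0)]))  # 1.5
print(desert_adjusted_value([(0.5, 0.0), (-0.5, 1.0)]))  # 0.5
```

Two functions and a weight hardly seem egregiously complex, which is the point.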
Also, I don’t think you can do a great job of representing deontological views as trying to minimize the extent to which rules are broken by people in general. The reason is that deontological duties are usually thought to be agent-relative (and probably time-relative as well). Deontologists think I have a special duty to see to it that I don’t break my promises, in a way that I don’t have a duty to see to it that you don’t break yours. They wouldn’t be happy, for instance, if I broke a promise in order to see to it that you kept two promises of roughly equal importance. Now, if you think of deontologists as trying to satisfy some agent-relative and time-relative goal, you might be able to think of them as just trying to maximize the satisfaction of that goal. (I think this is right.) If you find this issue interesting (personally I don’t find it all that interesting), googling “Consequentializing Moral Theories” should put you in touch with the relevant philosophy.
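To make the promise example concrete, here is another toy sketch (again my own construction, with an arbitrary weight): an agent-neutral tally of broken promises says I should break mine to stop your two, while an agent-relative goal that weights my own promise-breakings much more heavily says I shouldn’t.

```python
# Toy contrast between agent-neutral and agent-relative evaluation.
# Outcomes are just tallies of how many promises each agent breaks.

def agent_neutral_value(outcome):
    """Agent-neutral: count broken promises, whoever breaks them."""
    return -sum(outcome.values())

def agent_relative_value(outcome, me, my_weight=100):
    """Agent-relative: my own broken promises count far more heavily
    for me (the weight of 100 is arbitrary, just for illustration)."""
    return -sum(my_weight * n if agent == me else n
                for agent, n in outcome.items())

# Outcome A: I break one promise so that you keep two you would otherwise break.
# Outcome B: I keep my promise and you break your two.
a = {"me": 1, "you": 0}
b = {"me": 0, "you": 2}

print(agent_neutral_value(a), agent_neutral_value(b))                # -1 -2: A ranks higher
print(agent_relative_value(a, "me"), agent_relative_value(b, "me"))  # -100 -2: B ranks higher
```

That is the sense in which the deontologist can still be read as maximizing something, just not an agent-neutral something.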