tl;dr: seems like you need some story for what values a group highly regards / rewards. If those are just the values that serve the group, this doesn’t sound very distinct from “groups try to enforce norms which benefit the group, e.g. public goods provision” + “those norms are partially successful, though people additionally misrepresent the extent to which they e.g. contribute to public goods.”
Similarly, larger countries do not have higher ODA as the public goods model predicts
Calling this the “public goods model” still seems backwards. “Larger countries have higher ODA” is a prediction of “the point of ODA is to satisfy the donor’s consequentialist altruistic preferences.”
The “public goods model” is an attempt to model the kind of moral norms / rhetoric / pressures / etc. that seem non-consequentialist. It suggests that such norms function in part to coordinate the provision of public goods, rather than as a direct expression of individual altruistic preferences. (Individual altruistic preferences will sometimes be why something is a public good.)
This system probably evolved to “solve” local problems like local public goods and fairness within the local community, but has been co-opted by larger-scale moral memeplexes.
I agree that there are likely to be failures of this system (viewed teleologically as a mechanism for public goods provision or conflict resolution) and that “moral norms are reliably oriented towards providing public goods” is less accurate than “moral norms are vaguely oriented towards providing public goods.” Overall the situation seems similar to a teleological view of humans.
For example, if global anti-poverty work suddenly becomes much more cost-effective, one doesn’t vote or donate to spend more on global poverty, because the budget allocated to that internal faction hasn’t changed.
I agree with this, but it seems orthogonal to the “public goods model,” this is just about how people or groups aggregate across different values. I think it’s pretty obvious in the case of imperfectly-coordinated groups (who can’t make commitments to have their resource shares change as beliefs about relative efficacy change), and I think it also seems right in the case of imperfectly-internally-coordinated people.
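The difference between the two aggregation schemes can be sketched with a toy numerical model (all numbers and names below are hypothetical, chosen only for illustration):

```python
# Toy model: an agent with two internal "moral factions" (hypothetical numbers).
# Under fixed budget shares, spending on a cause does not respond to changes in
# its cost-effectiveness; under consequentialist aggregation, it does.

def fixed_share_spending(total_budget, shares):
    """Each faction's spending is a fixed fraction of the budget."""
    return {cause: total_budget * s for cause, s in shares.items()}

def consequentialist_spending(total_budget, effectiveness):
    """All spending goes to the most cost-effective cause."""
    best = max(effectiveness, key=effectiveness.get)
    return {cause: (total_budget if cause == best else 0.0)
            for cause in effectiveness}

budget = 100.0
shares = {"global_poverty": 0.1, "local_causes": 0.9}

# Suppose global anti-poverty work becomes 10x more cost-effective...
effectiveness = {"global_poverty": 10.0, "local_causes": 1.0}

# ...fixed-share spending on it is unchanged, while a consequentialist
# aggregator would reallocate everything to it.
print(fixed_share_spending(budget, shares))
print(consequentialist_spending(budget, effectiveness))
```

The point is only that under the faction model, spending is insensitive to changes in beliefs about relative efficacy, which matches the quoted observation about voting and donating.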
(We have preference alteration because preference falsification is cognitively costly, and we have preference falsification because preference alteration is costly in terms of physical resources.)
E.g., people overcompensate for private deviations from moral norms by putting lots of effort into public signaling including punishing norm violators and non-punishers, causing even more preference alteration and falsification by others.
I don’t immediately see why this would be “compensation,” it seems like public signaling of virtue would always be a good idea regardless of your private behavior. Indeed, it probably becomes a better idea as your private behavior is more virtuous (in economics you’d only call the behavior “signaling” to the extent that this is true).
As a general point, I think calling this “signaling” is kind of misleading. For example, when I follow the law, in part I’m “signaling” that I’m law-abiding, but to a significant extent I’m also just responding to incentives to follow the law which are imposed because other people want me to follow the law. That kind of thing is not normally called signaling. I think many of the places you are currently saying “virtue signaling” have significant non-signaling components.
I don’t immediately see why this would be “compensation,” it seems like public signaling of virtue would always be a good idea regardless of your private behavior.
I didn’t have a clear model in mind when I wrote that, and just wrote down “overcompensate” by intuition. Thinking more about it, I think a model that makes sense here is to assume that your private actions can be audited by others at some cost (think of Red Guards going into people’s homes to look for books, diaries, assets, etc., to root out “counter-revolutionaries”), so if you have something to hide you’d want to avoid getting audited by avoiding suspicion, and one way to do that is to put extra effort into public displays of virtue. People whose private actions are virtuous would not have this extra incentive.
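This audit story can be made concrete with a toy decision model (all parameters hypothetical): audits are more likely for agents who look less publicly virtuous, so an agent with something to hide trades off display effort against the expected penalty, while a genuinely virtuous agent has no such term to offset.

```python
# Toy audit-avoidance model (hypothetical numbers). Public displays of virtue
# reduce the probability of being audited. An agent with something to hide
# weighs display cost against expected penalty if audited, so their optimal
# display level exceeds that of a genuinely virtuous agent.

def expected_cost(display, hiding_something,
                  display_cost=1.0, penalty=50.0, base_audit_prob=0.5):
    # Audit probability falls as public displays of virtue increase.
    audit_prob = base_audit_prob / (1.0 + display)
    cost = display_cost * display
    if hiding_something:
        cost += audit_prob * penalty  # expected penalty if caught
    return cost

def best_display(hiding_something):
    # Grid search over display effort levels 0.0, 0.1, ..., 20.0.
    grid = [i / 10 for i in range(201)]
    return min(grid, key=lambda d: expected_cost(d, hiding_something))

print(best_display(hiding_something=True))   # invests extra effort to deflect audits
print(best_display(hiding_something=False))  # no audit-avoidance incentive
```

Under these assumptions the agent with something to hide “overcompensates” in exactly the sense described: the extra display effort is driven by the audit-deflection term, not by any direct benefit of looking virtuous.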
As a general point, I think calling this “signaling” is kind of misleading.
I guess I’ve been using “virtue signaling” because it’s an established term that seems to be referring to the same kind of behavior that I’m talking about. But I acknowledge that the way I’m modeling it doesn’t really match the concept of “signaling” from economics, and I’m open to suggestions for a better term. (I’ll also just think about how to reword my text to avoid this confusion.)
If those are just the values that serve the group, this doesn’t sound very distinct from “groups try to enforce norms which benefit the group, e.g. public goods provision” + “those norms are partially successful, though people additionally misrepresent the extent to which they e.g. contribute to public goods.”
It’s entirely possible that I misunderstood or missed some of the points of your Moral public goods post and then reinvented the same ideas you were trying to convey. By “public goods model” I meant something like “where we see low levels of redistribution and not much coordination over redistribution, that is best explained by people preferring a world with a higher level of redistribution but failing to coordinate, instead of by people just not caring about others.” I was getting this by generalizing from your opening example:
The nobles are altruistic enough that they prefer it if everyone gives to the peasants, but it’s still not worth it for any given noble to contribute anything to the collective project.
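The free-rider structure in the quoted example can be made concrete with a toy public goods calculation (all numbers hypothetical):

```python
# Toy version of the nobles/peasants example (hypothetical numbers).
# Each noble values a unit of peasant welfare at altruism_weight. A gift
# costs the giver 1 and produces benefit_to_peasants units of welfare,
# which every noble values. Contributing is collectively rational but
# individually irrational: a classic public goods problem.

n_nobles = 100
altruism_weight = 0.05       # a noble's value for 1 unit of peasant welfare
benefit_to_peasants = 1.0    # welfare produced per unit given
cost = 1.0                   # cost to the giver per unit given

# Marginal payoff to a noble from their own contribution:
individual_payoff = altruism_weight * benefit_to_peasants - cost

# Payoff to each noble if *all* nobles contribute:
collective_payoff = n_nobles * altruism_weight * benefit_to_peasants - cost

# Negative individually, positive collectively: each noble prefers that
# everyone gives, but no single noble wants to give unilaterally.
print(individual_payoff, collective_payoff)
```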
Your sections 1 and 2 also seemed to be talking about this. So this is what my “alternative model” was in reaction to. The “alternative model” says that where we see low levels of redistribution (to some target class), it’s because people don’t care much about the target class of redistribution and assign the relevant internal moral faction a small budget, and this is mostly because caring about the target class is not socially rewarded.
Your section 3 may be saying something similar to what I’m saying, but I have to admit I don’t really understand it (perhaps I should have tried to get clarification earlier, but I thought I understood what the rest of the post was saying and could just respond to that). Do you think you were trying to make any points that have not been reinvented/incorporated into my model? If so, please explain what they were, or perhaps do a more detailed breakdown of your preferred model, in a way that would be easier to compare with my “alternative model”?
seems like you need some story for what values a group highly regards / rewards
I think it depends on a lot of things so it’s hard to give a full story, but if we consider for example the question of “why is concern about ‘social justice’ across identity groups currently so much more highly regarded/rewarded than concerns about ‘social justice’ across social classes” the answer seems to be that a certain moral memeplex happened to be popular in some part of academia and then spread from there due to being “at the right place at the right time” to take over from other decaying moral memeplexes like religion, communism, and liberalism. (ETA: This isn’t necessarily the right explanation, my point is just that it seems necessary to give an explanation that is highly historically contingent.)
(I’ll probably respond to the rest of your comment after I get clarification on the above.)
I don’t think that it’s just social justice across identity groups being at the right place at the right time. As a meme, it has the advantage that it allows people who are already powerful enough to affect social structures to argue that they should have more power. That’s a lot harder for social justice across social classes.
Relevant links: “if we can’t lie to others, we will lie to ourselves”, “the monkey and the machine”.