Maybe I should add something clarifying that virtue is not made of one thing. Virtue signals demonstrate particular qualities. You have to be rational and keep track of what signal is evidence of what and think clearly about how that may interact with other beliefs and qualities to lead to different outcomes, like you’re doing here.
Do you have an example of a virtue signal for non-maziness?
Openness, honesty, and transparency are signals for non-maziness.
Historically, there have been practices at EA organizations to signal those qualities. GiveWell recently decided to stop publishing the audio of their board meetings. That’s ceasing to send a virtue signal for non-maziness.
On the CEA side, there are a few bad signals. After promising Guzey confidentiality for his criticism of William MacAskill, a CEA community manager sent the criticism document to MacAskill in violation of that promise. Afterward, CEA’s position seemed to be that saying “sorry, we won’t break confidentiality promises again” deep in the comments of a thread is enough. No need to mention the violation on their Our Mistakes page, no personal consequences for breaking the promise, and no sense that they incurred a debt toward Guzey for which they have to do something to make it right.
CEA published images on their website from an EA Global where there was a Leverage Research table, and edited the images to remove the name of Leverage Research. Image editing like that is a signal of dishonesty.
Given CEA’s status in the EA ecosystem, openly speaking about either of those incidents has the potential to be socially costly. For anyone who cares about their standing in EA circles, talking about those things would be a signal for non-maziness.
Generally, signals for non-maziness often involve the willingness to create social tension with other people who are in the ingroup. That’s qualitatively different than requiring people to engage in costly signals like veganism or taking the giving pledge as EAs.
If CEA’s leadership engages in a bunch of costly prosocial signals like being vegan, that’s not enough when you’re deciding whether or not to trust them to keep confidentiality promises in the future, given the value they’ve put on past promises.
In general, I don’t fully agree with rationalist culture about what honesty demands. Like, that Leverage example doesn’t sound obviously bad to me. Maybe they just didn’t want to promote Leverage, or to confuse anyone about their position on Leverage, rather than preserving a historical record, which you seem to take to be the only legitimate goal? (Unless you mean the most recent EA Global, in which case it would look more like a cover-up.)
The advantage of pre-commitment virtue signals is that you don’t have to interpret them through the lens of your values to know whether the person fulfilled them. Most virtue signals depend on whether you agree the thing is a virtue, though, and when you have a very specific flavor of a virtue like honesty, that becomes ingroup-vs-neargroup-defining.
Honesty isn’t just a virtue. When it comes to trusting people, signals of honesty mean that you can take what someone is saying at face value. They allow you to trust people not to mislead you. This is why focusing on whether signals are virtuous can be misleading when you want to make decisions about trust.
Editing pictures that you publish on your own website to remove uncomfortable information is worse than just not speaking about that information. It would be possible to simply not publish the photo. Deciding to edit it to remove information is a conscious choice, and that choice is a signal.
I don’t know the full situation or what I would conclude about it, but I don’t think your interpretation is QED on its face. Like I said, I feel like it is potentially more dishonest or misleading to seem to endorse Leverage. I don’t know why they didn’t just not post the pictures at all, which seems the least confusing or deceptive option, but the fact that they didn’t doesn’t lead me to conclude dishonesty without knowing more.
I actually think LWers tend toward the bad kind of virtue signaling with honesty, and they tend to define honesty as not doing themselves any favors with communication. (Makes sense considering Hanson’s foundational influence.)
> Generally, signals for non-maziness often involve the willingness to create social tension with other people who are in the ingroup. That’s qualitatively different than requiring people to engage in costly signals like veganism or taking the giving pledge as EAs.
I disagree; I would call social tension a cost. Willingness to risk social tension is not as legible a signal, though, because it’s harder to track whether someone is living up to a pre-commitment.
Whether or not social tension is a cost is beside the point. Costly signals, by definition, come with costs.
If you have an environment where status is gained through costly signals that are only valued within that group, it drives status competition in a way where the people who end up on top will likely choose status over other ends.
That means organizations are not honest about the impact they are having and instead present themselves as creating more impact than they actually produce. It means that when high-status organizations inflate their impact, people avoid talking about it when doing so would cost them status.
If people optimize to gain status by donating and being vegan, you can’t trust people who donate and are vegan to make moves that cost them status but would lead to other positive ends.
> If people optimize to gain status by donating and being vegan, you can’t trust people who donate and are vegan to make moves that cost them status but would lead to other positive ends.
How are people supposed to know their moves are socially positive?
Also, I’m not saying to make those things the only markers of status. You seem to want to optimize for costly signals of “honesty”, which I worry is being Goodharted in this conversation.