One of the big problems with virtue signaling is that it suggests a one-dimensional scale, when we actually care about multiple different aspects of other people's character.
We care about people being willing to sacrifice for the good of others. We care about people being honest with themselves. We care about people being honest with others. We care about people being willing to change their minds when presented with compelling evidence.
If we discuss the example of the Buddhist monk: one of the core Buddhist beliefs is that life is suffering and that it's valuable to end that suffering. In the longtermist context, that means a Buddhist who sincerely follows those ideals might decide to let AGI end all life on earth. In this context, the veganism of the Buddhist monk is a signal that they are willing to act according to their beliefs, which makes the person more dangerous, not less dangerous.
Historically, Hitler also fits the pattern of a vegetarian who was very committed to living out his ideological commitments. Hitler was the kind of person willing to make huge sacrifices for them.
It's valuable to reduce animal suffering. On that basis, you can argue that veganism is virtuous, but that doesn't allow you to make the important predictions about the future actions of the people who practice it.
If you decide to hire someone for your organization, you have to decide whether or not you want to trust them. To make that decision well, you have to think about the characteristics that are actually important for your hiring decision. Provided they are generally skilled, I would argue that in the EA context the most important characteristics are those that prevent maziness, rather than a general willingness to make personal sacrifices for the greater good along lines like animal welfare, the environment, or personal donations.
Maybe I should add something clarifying that virtue is not made of one thing. Virtue signals demonstrate particular qualities. You have to be rational and keep track of what signal is evidence of what and think clearly about how that may interact with other beliefs and qualities to lead to different outcomes, like you’re doing here.
Do you have an idea of a virtue signal for non-maziness?
Openness, honesty, and transparency are signals for non-maziness.
Historically, there have been practices at EA organizations meant to signal those qualities. GiveWell recently decided to stop publishing the audio of their board meetings. That's a case of ceasing to send a virtue signal of non-maziness.
On the CEA side, there are a few bad signals. After promising Guzey confidentiality for his criticism of William MacAskill, a CEA community manager sent the criticism document to MacAskill, in violation of that promise. Afterward, CEA's position seemed to be that saying "sorry, we won't break confidentiality promises again" deep in the comments of a thread is enough: no need to mention the violation on their Our Mistakes page, no personal consequences for violating the promise, and no sense that they incurred a debt toward Guzey that they have to do something to make right.
CEA published images on their website from an EA Global where there was a Leverage Research table, and edited the images to remove the name of Leverage Research. Image editing like that is a signal of dishonesty.
Given CEA's status in the EA ecosystem, openly speaking about either of those incidents has the potential to be socially costly. For anyone who cares about their standing in EA circles, talking about those things would be a signal of non-maziness.
Generally, signals of non-maziness often involve the willingness to create social tension with other people in the ingroup. That's qualitatively different from requiring people to engage in costly signals like veganism or taking the Giving Pledge as EAs.
If CEA's leadership engages in a bunch of costly prosocial signals like being vegan, that's not enough when you decide whether or not to trust them to keep confidentiality promises in the future, given the value they put on past promises.
In general, I don't fully agree with rationalist culture about what honesty demands. That Leverage example doesn't sound obviously bad to me: maybe they just didn't want to promote Leverage or confuse anyone about their position on it, rather than to create a historical record, which you seem to take to be the only legitimate goal? (Unless you mean the most recent EA Global, in which case it would look more like a cover-up.)
The advantage of pre-commitment virtue signals is that you don't have to interpret them through the lens of your values to know whether the person fulfilled them. Most virtue signals depend on whether you agree the thing is a virtue, though, and when you have a very specific flavor of a virtue like honesty, that becomes ingroup-versus-neargroup defining.
Honesty isn't just a virtue. When it comes to trusting people, signals of honesty mean that you can take what someone says at face value. They allow you to trust people not to mislead you. This is why focusing on whether signals are virtuous can be misleading when you want to make decisions about trust.
Editing pictures that you publish on your own website to remove uncomfortable information is worse than just not speaking about certain information. It would have been possible to simply not publish the photo. Deciding to edit it to remove information is a conscious choice, and that choice is a signal.
I don't know the full situation or what I would conclude about it, but I don't think your interpretation is QED on its face. Like I said, I feel like it is potentially more dishonest or misleading to seem to endorse Leverage. Idk why they didn't just not post the pictures at all, which seems the least potentially confusing or deceptive, but the fact that they didn't doesn't lead me to conclude dishonesty without knowing more.
I actually think LWers tend toward the bad kind of virtue signaling with honesty, and they tend to define honesty as not doing themselves any favors with communication. (Makes sense considering Hanson’s foundational influence.)
> Generally, signals of non-maziness often involve the willingness to create social tension with other people in the ingroup. That's qualitatively different from requiring people to engage in costly signals like veganism or taking the Giving Pledge as EAs.
I disagree; I would call social tension a cost. Willingness to risk social tension is not as legible a signal, though, because it's harder to track whether someone is living up to a pre-commitment.
Whether or not social tension is a cost is beside the point. Costly signals nearly always come with costs.
If you have an environment where status is gained by costly signals that are only valued within that group, it drives status competition in a way where the people who end up on top will likely choose status over other ends.
That means organizations are not honest about the impact they're having and instead present themselves as creating more impact than they actually produce. It means that when high-status organizations inflate their impact, people avoid talking about it when doing so would cost them status.
If people optimize to gain status by donating and being vegan, you can't trust people who donate and are vegan to make moves that cost them status but would lead to other positive ends.
> If people optimize to gain status by donating and being vegan, you can't trust people who donate and are vegan to make moves that cost them status but would lead to other positive ends.
How are people supposed to know their moves are socially positive?
Also, I'm not saying to make those things the only markers of status. You seem to want to optimize for costly signals of "honesty", which I worry is being Goodharted in this conversation.