While I don’t disagree with the object-level point of this post, I generally think things of the form “We should all condemn X!” belong on social media, not on LessWrong.
“Let’s all condemn X” is a purely political topic for most values of X. This post in particular is worded in a way which gives a very strong vibe of encouraging groupthink, encouraging soldier mindset (i.e. the counterpart to scout mindset), and encouraging people to play simulacrum level 3+ games rather than focus on physical reality. In short, it is exactly the sort of thing which I do not want on LessWrong, even when I agree with the goals it’s ultimately trying to achieve.
Strong downvoted.
I’m not sure what your view is on the utility of LessWrong as a medium, but I think it’s primarily useful as a way to build common knowledge, share information, and coordinate as a community. In that respect, I think this is an extremely important thing for us to coordinate and build common knowledge on.
LW’s stated mission, IIRC, is roughly “accelerate intellectual progress on the important problems facing humanity”, and I think that’s a basically-accurate description of the value LessWrong provides. The primary utility of LessWrong is in cultural norms and a user base conducive to that mission.
For example, comment boxes on every frontpage post have these guidelines:
Aim to explain, not persuade
Try to offer concrete models and predictions
If you disagree, try getting curious about what your partner is thinking
Don’t be afraid to say ‘oops’ and change your mind
LessWrong’s primary utility is a culture which makes things like that part of a natural, typical communication style.
I would say that a core part of that culture is to generally try to stay on low simulacrum levels—talk literally and directly about our actual models of the world, and mostly not choose our words as moves in a social game. Insofar as simulacrum level 3 is a coordination strategy, that means certain kinds of coordination need to happen somewhere else besides LessWrong. And at current margins, that’s a very worthwhile tradeoff! By default, humans tend to turn every available communication channel into a coordination battleground, so there are few spaces out there which stay at low simulacrum levels, and the marginal value of such spaces is therefore quite high. Thus the value of LessWrong: it’s primarily a forum for intellectual progress, i.e. improving our own understanding, not a forum for political coordination.
While it’s true that simulacrum level 3 is a coordination strategy, I feel that we should be able to build a community that can coordinate while staying on simulacrum level 1. This means we’re allowed to say things like, “I publicly commit to following a system of norms where [something, e.g. using fraud for EA funding] is prohibited”. That is, replace “playing a game while pretending to talk about facts” with “playing a game while being very explicit about the rules and the moves”. Maybe Evan’s choice of language was suboptimal in some ways, but there needs to be some way to say it that doesn’t have to be banished to social media. Among other reasons, I don’t want to rely on social media for anything, and personally I don’t use or follow social media at all (I don’t even have an account on anything except LinkedIn).
Yes, and to expand only slightly: Coordinating against dishonest agents or practices is an extremely important part of coordination in general; if you cannot agree on removing dishonest agents or practices from your own group, the group will likely be worse at accomplishing its goals, and groups that cannot remove dishonest actors will be correctly distrusted by other groups and individuals.
All of these are important and worth coordinating on, which I think sometimes means “Let’s condemn X” makes sense even though the outside view suggests that many instances of “Let’s condemn X” are bad. Some inside view is allowed.
if you cannot agree on removing dishonest agents or practices from your own group
What group, though? I’m not aware of Sam Bankman-Fried having posted on Less Wrong (a website for hosting blog posts on the subject matter of human rationality). If he did write misleading posts or comments on this website, we should definitely downvote them! If he didn’t, why is this our problem?
(That is to say less rhetorically, why should this be our problem? Why can’t we just be a website where anyone can post articles about probability theory or cognitive biases, rather than an enforcement arm of the branded “EA” movement, accountable for all its sins?)
Because it is branded with the EA movement by being lesswrong.com. It cannot be unbranded except by changing the associations people actually make; its true position in the latent network of relationships online makes it associated. You may not be aware of your position in a larger organism, but that doesn’t mean you aren’t in one just because you only want to focus on the contents of your own cell. If you insist on not thinking about the larger organisms you participate in, that’s alright, but it makes you a skin cell, not a nerve cell.
Edit: I suppose a basic underlying viewpoint I have is that all signaling is done by taking actions, and the only actions worth taking are ones that send signals into the universe that shape the universe towards the forms you wish it to have. Lifting something off the ground is signaling, and signaling is measured in watts. False signals are lying; don’t do those, they’re worse than useless. Putting map signals into another brain that do not match the signals you’re sending into the territory is dishonesty, and the false signals themselves are the thing under question, which needs to be repaired into honesty by example.
Because it is branded with the EA movement by being lesswrong.com.
What does the name “lesswrong” have to do with EA? There’s a certain overlap between the two communities, but LessWrong’s mission has nothing to do with EA specifically. To the extent that it has any mission other than the one on its face, raising the sanity waterline, that mission — historically, Eliezer’s mission — was to get people to think properly about AI and avert the coming doom.
FWIW, I am not and never have been an EA and do not read or participate in EA forums, but I’ve been on LW since it began on OvercomingBias. If it became “an enforcement arm of the branded “EA” movement, accountable for all its sins” I would leave.
I agree that there is value in common-knowledge building, but there is a difference between doing something that feels social-miasma- or simulacrum-level-3-shaped, where you assert that “WE ALL AGREE THAT X”, and arguing that something is a good idea and that you currently believe lots of other people believe the same.
I think coordinating against dishonest practices is important, but I don’t think that, in order to do that, we have to move away from making primarily factual statements or describing our own belief states and invent some kind of group-level belief.
Where do you think I make any claims that “everyone agrees X” as opposed to “I think X”? In fact, rereading my own writing, I think I was quite clear that everything therein was my view and my view alone.
I think the title is the biggest problem here:
We must be very clear: fraud in the service of effective altruism is unacceptable
There is no “I think” here, no “I believe”. At least to me, it feels very much like a war cry rather than a statement about the world.
to make clear that we don’t support fraud in the service of effective altruism.
This is also a call to action to change some kind of collective belief. I agree that you might have meant “we individually don’t support fraud”, but the phrase “in the service of effective altruism” gives me the sense that this refers to a collective belief of effective altruism.
I do agree you have overall been pretty clear, and I appreciate the degree to which you ground things in your personal beliefs, but I do think the title as well as the central call to action of the post goes against that.
I agree that the title does directly assert a claim without attribution, and that it could be misinterpreted as a claim about what all EAs think should be done rather than just what I think should be done. It’s a bit tricky because I want the title to be very clear, but am quite limited in the words I have available there.
I think the latter quote is pretty disingenuous—if you quote the rest of that sentence, the beginning is “I think the best course of action is”, which makes it very clear that this is a claim about what I personally believe people should do:
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don’t support fraud in the service of effective altruism.
To be clear, “in the service of effective altruism” there is meant to refer to fraud done for the purpose of advancing effective altruism, not that we have an obligation not to support fraud, where that obligation is in the service of effective altruism.
Edit: To make that last point more clear, I changed “to make clear that we don’t support fraud in the service of effective altruism” to “to make clear that we don’t support fraud done in the service of effective altruism”.
I still get a strong feeling of groupthink every time I see the title of the post, and feel a strong sense of something invading my thought-space in a way that feels toxic to me. For some reason this feels even stronger in the Twitter post you made:
I don’t know, I just feel like this is some kind of call-to-action that is trying to bypass my epistemic defenses.
The Twitter post is literally just title + link. I don’t like Twitter, and don’t want to engage on it, but I figured posting this more publicly would be helpful, so I did the minimum thing to try to direct people to this post.
From my perspective, I find it pretty difficult to be criticized for a “feeling” that you get from my post that seems to me to be totally disconnected from anything that I actually said.
Yeah, I am sorry. Like, I don’t think I currently have the energy to try to communicate all the subtle things that feel wrong to me about this, but it adds up to something I quite dislike.
I wish I had a more crystallized quick summary that I expect to cross the inferential distance quickly, but I don’t currently.
FWIW when I first saw the title (on the EA Forum) my reaction was to interpret it with an implicit “[I think that] We must be very clear: fraud in the service of effective altruism is unacceptable”.
Things generally don’t just become true because people assert them to be—surely people on LW know that. I think habryka’s concern about the title not including “I think” is overblown. Dropping “I think” from the title is reasonable IMO to make the title more concise; I don’t anticipate it degrading the culture of LW. I also don’t see how it “bypasses epistemic defenses.” If the absence of an “I think” in your title will worsen readers’ epistemics, then those readers are at great risk of getting terrible epistemics from seeing any news headlines.
I don’t mean to say that there’s no value in using more nuanced language, including “I think” and similar qualifications to be more precise with one’s words, just that I think the karma/vote ratio your post received is an over-reaction to concern about posts of your style degrading the level-one “Attempt to describe the world accurately” culture of LW.
IDK where habryka is coming from, but to me the post is good, and the title is fine but gives a twinge from the words “We” and “must”, and those words together. (The phrase “is unacceptable” is also implicitly speaking from a social-collective-objective perspective, if you know what I mean. Which is fine, but it contributes to the twinge.) Things that would, to me, decrease the twinge:
EAs should be....
EA must unambiguously not accept fraud...
That’s a low-character-count way to be a bit more specific about who We is, to whom something Is Unacceptable. It’s maybe not what you really mean; maybe you really mean something more complicated like “people who want to ambitiously do good in the world” or something, and you don’t have a low-character way to say that, and “We” is aspirationally pointing at that.
In the post you clarify
we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it
and say
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don’t support fraud done in the service of effective altruism.
Which is reasonable. The title, though, by touching on the We, seems to me to “make it” a “decision that is the group’s decision”.
It sure is a call to action; your epistemic defenses had better be good enough to figure out that it is a good one, because it is, and it is correct to pressure you about it. The fact that you’re uncertain about whether I am right does not mean that I am uncertain. It is perfectly alright to say you’re not sure if I’m right. But being annoyed at people for saying you should probably come to this conclusion is not reasonable when that conclusion is simply, actually, objectively justified—instead, say you will have to think about it because you aren’t sure you see the justification yet, or something, and remember that you don’t get to exclude comments from affecting your reputation, ever. If there’s a way you can phrase your request for courtesy about required updates that better clarifies that you are in fact willing to make updates that do turn out to be important and critical moral-cooperation policy updates, then finding that phrasing may in fact be positive expected value for the outside world, because it would help people request moral updates of each other in ways that do not push too hard. But it is often correct to push. Do not expect people to make an exception because the phrasing was too much pressure.
I think it’s useful for that as well, but I think it’s primarily a place to pursue truth and rationality (“the nameless virtue”).
I think the main issue here is that Less Wrong is not Effective Altruism, and that many (at a guess, most) LW members are not affiliated with EA or don’t consider themselves EAs. So from that perspective, while this post makes sense in the EA forum, it makes relatively little sense on LW, and to me looks roughly like being asked to endorse or disavow some politician X. (And if I extend the analogy, it’s inevitably about a US politician even though I live in another country.)
So this specific EA forum post is just a poor fit for reposting on LW without a complete rewrite.
That said, the core sentiment (ethics; fraud is bad; the ends don’t justify the means; etc.) obviously does have a place on LW, so there’s probably a way to write a dispassionate current-events take on e.g. this post from the Sequences.
I wanted to thank you for writing this comment—while I have also been reasonably active on social media about this topic, and playing level 3+ games is sometimes necessary in the real world, I don’t think this post actually offers any substantive content that goes beyond “fraud is bad and FTX was involved in fraudulent activities”.
I agree that it’s not a good fit for LW, though I think the post does fit in the EA Forum given recent events.
Yep, I also agree on the object level, but if the proposal is “we should collectively communicate something to the public”, then we should probably also get some feedback from people who have non-zero experience with communicating to the public. Not about the message, but about its form.
For example, when I see people saying things like “We must all collectively condemn X”, I take it as evidence that many people support X… otherwise there would be no need to go hysterical, right? If it was just one person, you might simply say: “hey, John Doe is not one of us, do not listen to him if he speaks in our name”.
So in situations like this, we need to avoid not just lying, but also telling the truth in a way that predictably leads people to the opposite conclusion. (“They said X. In this business, when someone says X, they actually mean Y. Therefore, Y.”) Speaking for myself, I have no idea how to do it, because I have zero expertise in this area. When someone proposes a communication strategy, I would like to know what their expertise is.
Of course, speaking for themselves, anyone is free to say anything. But for speaking in the name of a community, it would be nice to know the rules for “speaking in the name of a community” before doing so. There are such things as protesting too much. There are such things as creating associations; you keep saying “X is not Y”, and people remember “X is… uhm… somehow associated with Y”.