y’all, reread your comments here and see if you can figure out why everyone is so worried that the ai safety people might be up to something nefarious. your immune systems are hyperactive about the idea of being asked to do something because it is moral. instead, simply delete the part of your interpretation that labels this post as unfairly demanding, and interpret it as a suggestion that we should be better at this sort of thing.
I am not being asked to do something because it is moral. I am being asked to do something because it is signaling. Evan is primarily telling me I’m obligated to do PR-control for EA, but that is something I do not actually care that much about and do not believe I am obligated to do, and that’s why I strong-downvoted the post.
There is really no moral question here about fraud. From reading the post, it seems that Evan is not actually uncertain about whether I or Zvi Mowshowitz or Eliezer Yudkowsky or really any LessWronger is in favor of fraud in pursuit of Effective Altruism. He seems to me fairly confident that none of us are, and he follows that up by explicitly asking us to do as much signaling as we can (“and I mean all of us, anyone who has any sort of a public platform”). It currently reads to me that you have conflated ‘being ethical’ with ‘signaling that you are ethical’, and I think that’s a pretty substantial mistake.
I expect I will always downvote posts that command everyone to do this sort of signaling; I think they have almost no place on LessWrong. This is not a healthy way of coordinating people around ethical norms.

Edit: First paragraph was a mistake.
fwiw I think gears’ comment is sort of directionally right.
I think there is something important that Oli and John were correct to be defending on LessWrong: epistemic culture, preventing things from moving towards higher simulacra levels, etc. But the LessWrong-style way of doing things also feels kinda stunted at coordination.
There’s sort of a package deal that (much of) society offers on how to do moral coordination (see: “Simulacrum 3 As Stag-Hunt Strategy”), which has a lot of problems epistemically, strategically, and morally. My sense is that LW-er types are often trying to roll their own coordination schemes, and this will (hopefully) eventually result in something better and more epistemically/lawfully grounded. But in the meantime it means there are a lot of obvious tools we don’t have access to (including “interfacing morally/coordinationally with much of the rest of the world”, which is one of the key points of, well, morality and coordination).
I endorse making the overall tradeoff, but it seems like it should come with more awareness that… like, we’re making a tradeoff by having our memetic immune system trigger this hard. Not just uniformly choosing a better option.
...
Followup note: there’s a distinction between “what is right for LessWrong” and “what is right for the broader rationalsphere on EA Forum and Twitter and stuff.” I think Oli had criticized both, separately. LessWrong is optimizing especially hard for epistemics and intellectual progress, and I think that’s correct. It’s less obvious to me whether it’s bad that this post got 600 karma on EA Forum. In my dream world, the whole EAcosystem has better coordination and/or morality tech that doesn’t route through Simulacrum 3 signaling games that are vulnerable to co-option. But I think we’re still in an uncanny valley of coordination theory/practice, and I’m not sure what the right approach is for non-LW discourse in the meantime.
I don’t actually have a strong belief that the OP is good at accomplishing its goal. Just, the knee-jerk reaction to it feels like it has a missing mood to me.

(Upvote-disagree.)
I am not being asked to do something because it is moral. I am being asked to do something because it is signaling. Evan is primarily telling me I’m obligated to do PR-control for EA, but that is something I do not actually care that much about and do not believe I am obligated to do, and that’s why I strong-downvoted the post.
Seems like a pretty blatant misrepresentation of what I wrote. In justifying why I think you have an obligation to condemn fraud in the service of effective altruism, I say:
Assuming FTX’s business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms.
That’s pretty clearly a moral argument and not about PR at all.

I think that’s a mistake. Retracted. Will see if I can come back to this in the next day or two and clean up what I was saying a bit more.

You make a virtue ethics moral argument at a place that’s dominated by utilitarian ethics.
[oh man, my underuse of punctuation makes this hard to read. editing that now; sorry, it’s hard with voice recognition to add enough punctuation.]
I think I actually have a different view of moral communication than you. When someone casts doubt on your personal commitment to honesty, by demonstrating that stances you have expressed yourself have led them to concerning conclusions, clarifying your stance warrants communicating clearly, by action and word, that you disagree with those concerning conclusions, in order to ensure that the word-bindings of your own moral philosophy are actually connected by example to the behaviors you wish your moral philosophy to refer to in the territory. Map-words lose their meaning if they are not used in association with the territory they are intended to point at; this “signaling”, as you refer to it, is the task of showing rather than telling that your moral philosophy means what it is intended to mean, after a major implementation of that moral philosophy has been revealed to be corrupt.
To put it another way: someone has cast doubt on the algorithms “utilitarianism” and “ea”, and so to the degree you share those algorithms, you now need to check for bugs in your implementation of them. Someone published an incident of a CVE in utilitarianism getting exploited (by SBF); it’s important, after an error, to actually run the exception handler. One of the steps of the exception handler should in fact be to figure out what you wish to demand of others as part of your moral coprotection-establishment communication process. Your insistence that it is not reasonable for others to have an edge of demand when they ask you to participate in clarifying the ground rules of morality is understandable, because of the pressure that can imply; but although I understand why you hesitate, I expect you to figure out how to concisely say that you are not an ends-justify-the-means pure utilitarian.
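To make the exception-handler metaphor above concrete, here is a toy sketch in python. Every name in it (Action, MoralPhilosophyBug, handle_incident) is invented for the analogy; it is not anyone’s real decision procedure or codebase, just the shape of “an exploit was published, so state your hard constraints out loud”:

```python
from dataclasses import dataclass

# Toy illustration of the "run the exception handler" metaphor.
# All names here are made up for the analogy; nothing models real
# decision theory or any actual system.

@dataclass
class Action:
    name: str
    expected_value: float
    respects_constraints: bool  # e.g. "does not defraud anyone"

class MoralPhilosophyBug(Exception):
    """The 'CVE': ends-justify-the-means reasoning slipping past a hard constraint."""

def evaluate(action: Action) -> float:
    # Naive pure-utilitarian step: score by expected value alone,
    # but refuse to score actions that break a hard constraint.
    if not action.respects_constraints:
        raise MoralPhilosophyBug(f"hard constraint violated by {action.name!r}")
    return action.expected_value

def handle_incident(err: MoralPhilosophyBug) -> None:
    # The step being asked for: once the exploit is public, say explicitly
    # which constraints your implementation treats as non-negotiable.
    print(f"incident: {err}")
    print("clarified stance: fraud is out of bounds regardless of expected value.")

if __name__ == "__main__":
    try:
        evaluate(Action("misappropriate customer deposits",
                        expected_value=1e9,
                        respects_constraints=False))
    except MoralPhilosophyBug as err:
        handle_incident(err)
```

The only point of the sketch is that the handler is a separate, explicit step: the scoring function by itself never announces which constraints it treats as hard.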
For further discussion, I’d suggest reading the EA forums post—it goes into this stuff in detail in the comments and there are great discussions being had.
And to be clear, I do expect that there is some form of this moral self-description that you would already attempt to describe yourself as being bound to follow by honor and promise. I don’t think that the allergic reaction folks have to someone saying “we must clarify our moral stance” is completely unwarranted; attempts to clarify a moral stance overconfidently can in fact cause harm and themselves become risks to community security. But I think the level of allergic response is overcalibrated, and y’all should consider that the level of allergy to clarifying your viewpoint is in fact an example of a bad pattern in intelligent-being group social behavior. Verifying co-protection is hard, and it’s understandable for your immune system to have reactions to others’ immune systems, but it is important to figure out how to participate in the multi-agent immune system in ways that are grounded, predictive, honest, and accurate. In a significant sense, this sort of error checking is the multi-agent safety problem we need to solve for AGI.
You make some reasonable points. I think it would be quite good for the EA ecosystem to now take steps to (a) make sure it isn’t possible in the future (and hasn’t already happened elsewhere) that someone who will take unethical action to get money and respect can gain this much power and leadership in the community, and (b) make costly, public signals that it thinks fraud is immoral and unacceptable, so that future trade partners are able to trust the morality of the EA ecosystem. I think there are healthy ways of doing that; for the latter, I suspect some form of survey or signed letter would be a good step (e.g. “We believe the principles of Effective Altruism are inconsistent with defrauding people — signed by 10,000 people who subscribe to the principles of Effective Altruism”), and I think there are other more substantive ideas here too. This has also left me thinking about ideas for how to do the former in my own spaces, including various whistleblower setups.
I think the thing that you’re most missing is that I have not been pushing on EA marketing and EA growth for many years, nor explicitly speaking on its behalf or as a representative of it, and a lot of what has been said by the marketing people has not reflected me or had my buy-in. For 4-5 years now this has increasingly been not my movement, especially in terms of growth and publicity. I still use a lot of the principles and respect some of the people involved, but the love is substantially gone.

I specifically feel like the post is asking me to post on my social media (it talks about people with any ‘public platform’) and participate in propagating the ‘collective beliefs’ of the movement, as though I had previously been involved in stating the ‘collective beliefs’ in service of EA’s growth on social media and in old media, or thought that was a good idea, when in fact I think most of the public-facing marketing has been horrendous, costly, and net-negative. Over many years, some other people went and tried to publicly say what ‘we believe’, which I found alienating and epistemically suspect; and now that a bunch of the moral respect has been burned, the demand is that I come in and take responsibility for propagating more of what ‘we’ stand for in those communication channels, with communication tools I was against in the first place.

Like, if I had spoken in this way and lent my word to it, then I think the post title would be far more reasonable; but I’ve almost entirely been against it and seen it done against my wishes, and I have felt alienated. At this point I’m open to doing so, and might, if asked respectfully and not in a way that implies I was already bought in and am not being a team player for failing to be; but I am strongly resistant to the implication that I am obligated to show up and propagate a collective belief, because I do not endorse propagating these collective beliefs in general. I understand that it may look like EA is inconsistent and shameful from the outside, but I am not responsible for the inconsistency in its public messaging; I was against advertising collective beliefs and have not been doing so.
I think it’s correct for me to personally shoulder some of the blame for the bad consequences of EA, which I have been in a good trade relationship with (organizing retreats, building software for it, etc.), and I’m still thinking about what to do about that.
I also agree it’s a time for moral reflection.
I agree it’s important for the EA ecosystem to send a strong signal that fraud is not permitted by EA principles. I think that if anyone wants me to participate in that signal, they ought to put in some hard work and find a route that does not sacrifice my epistemological principles in the doing of it, even if it is urgent for the reputation of the ecosystem. The epistemology is just really not for the giving up. Yes, what Sam+Caroline did was probably horrendous (I am not 100% certain; more information may come to light). Yes, if so, the EA ecosystem must clearly send costly signals that it does not endorse this behavior in order to continue to be respected as a moral entity. But I’m not okay with that method being a post whose title isn’t trying to inform, but is instead saying words because of the coordination effects it hopes to have on people, and is (at a pretty key point in time) trying to move speech acts away from truth and toward signaling.
P.S. It’s one o’clock in the morning. I will later regret not reflecting more on this comment before posting, but I will also regret not posting anything at all because I am busy all of tomorrow and won’t be able to reply then either, and I’d like to respond to this promptly. Which is to say, I may later on realize I don’t quite endorse something or other I said here.
You make interesting points in return! And I have no strong disagreement with any of them. Certainly I do think that none of us are SBF, but we are at a lower graph distance from him than many, and had some path proximity to the algorithm choices he appears to have potentially used. Seems like we’re near the same page here.