Is there something you find particularly interesting here? There are a couple of things it gets sort of right (the historical role certain parts of EA played in influencing OpenAI, and arguably their current-day role w.r.t. Anthropic), but the idea that EA thinks x-risk reduction is a matter of creating ever-more-powerful LLMs is so not-even-wrong that there isn’t really any useful lesson I can imagine drawing from it, and if you don’t already know the history, your beliefs would be less wrong if you ignored this altogether.
I think it’s actually kinda reasonable for an outside observer to look at where all the money is going, see that EA money is funding Anthropic and OpenAI, look at what those orgs are doing, and pay more attention to that output than to the sounds the people arguing on the internet are making.
The problem with this article is that it doesn’t use the terms “billionaire” and “white male” enough. If she had explained to me just a couple more times that alignment researchers tend to be white men, I would have been convinced.
Downvoted because this isn’t misinformation, just good external criticism of a similar type to what internal criticism tends to look like anyway.
Mischaracterizations, misleading language, and false dichotomies count as misinformation. Just because it’s prevalent on the modern internet doesn’t change the fact that it misdirects people away from having accurate models of reality.
What makes internet content misinformation is how manipulative and misleading the piece is, not whether the author has plausible deniability for unintentionally getting something wrong or thinking suboptimal thoughts. Real-life interactions have lower standards for misinformation, because the internet hosts massive billion-dollar industries for lying to people at scale, industries systematically optimized to make their authors and outlets immune to accusations of outright lying.
Oh, it’s Gebru. Yeah, she’s a bit dug in on some of her opinions in ways I don’t think are exactly true, but overall I agree with most of her points. My key point remains: most of her criticisms are pretty reasonable, and saying “this is misinformation!” is not a useful response to a post with a bunch of reasonable criticisms applied to bucket-errored descriptions. She seems to be correctly inferring that the money has had a corrupting influence, a point I think many effective altruists should be drastically more worried about at all times, forevermore; but she’s also describing a problem-containing system from a distance while trying to push back against people crediting parts of it that don’t deserve that credit, so her discrediting is somewhat misaimed. Since I mostly agree with her, we’d have to get into the weeds to be more specific.
She’s trying to take down a bad system. I see no reason to claim she shouldn’t; effective altruists should instead help take down that bad system and prove they have done so, while refusing to give up their name. Anything that can accurately be described as “effective altruism” is necessarily better than “ineffective altruism”; to the degree her post is a bad one, it’s because it conflates names, general social groups, and specific orgs. That’s a common practice among left-leaning folks, and I do think it brings discourse down, but as someone left-leaning myself, I try to respond by improving the discourse rather than wasting words on taking sides. I don’t disagree with your worry, but I think the way to respond to commentary like this is to actually discuss which parts of the criticism you can agree with.
But, more importantly: that’s already in progress, and your post’s title and contents don’t really give me a way to take action. It’s just a repost of the article.