Is there something you find particularly interesting here? There are a couple of things it gets sorta right (the historical role certain parts of EA played in influencing OpenAI, and arguably their current-day role w.r.t. Anthropic), but the idea that EA thinks x-risk reduction is a matter of creating ever-more-powerful LLMs is so not-even-wrong that I can't imagine drawing any useful lesson from it, and if you don't already know the history, your beliefs would be less wrong if you ignored this altogether.
I think it’s actually kinda reasonable for an outside observer to look at where all the money is going, see that EA money is funding Anthropic and OpenAI, see what those orgs are doing, and pay more attention to that output than to the noises the people arguing on the internet are making.