This passes my vibe check, but I don’t know if I’ll agree with it after thinking about it. Right now, my rough thoughts are:
0. What do we even mean by “Doomerism”? Just that P(everyone dies or worse within a few decades) is very high? If so, that’s consistent with apocalypse cults of all sorts, not just our own.[1] The people we call “Doomers” have a pretty different space of outcomes they’re worried about, and different models generating that space, than e.g. a UFO abductee, or a climate-change “believer” who thinks we’re dead in a couple of decades.
1. Doomerism does imply we should pay attention to capabilities updates: it says we should look for signs of a miracle. That miracle might be located in capabilities advances, in an unexpected chance for co-operation, or in alignment advances (which are also capabilities advances). So there will be some discussion of big capabilities breakthroughs. But IMO there is much less focus on capabilities here than on e.g. an ML forum. Still, the amount of such discussion has greatly increased lately, which leads to the next point.
2. Unfortunately, I am unsure whether Doomerism as-it-exists in this community will be communicated accurately to the public. That the resulting memes will develop a symbiotic relationship with capabilities advances remains worryingly plausible. Though I am unsure how exactly things will develop, I do think they’ll likely go badly. Look at the Covid rhetoric: did anyone anticipate it would evolve the way it did? How many even anticipated it would be as bad as it was? I certainly didn’t. Most of the capabilities advances seem driven by people with bad models of Doomerism.
3. What’s the actual evidence that Doomerism is driving capabilities advances? There’s the existence of OpenAI, Anthropic, and TruthAI, which made co-ordination harder. I think OpenAI’s actions counterfactually sped up capabilities. Is there anything else on that scale?
;)