I think one mindset that may be healthy is to remember:
Reality is too complex to be described well by a single idea (meme/etc.). If one responds to this by forcing each idea presented to be as good an approximation of reality as possible, then that causes all the ideas to become “colorless and blurry”, as any specific detail would be biased when considered on its own.
Therefore, one cannot really fight about whether an idea is biased in isolation. Rather, the goal should be to create a bag of ideas which in totality is as informative about a subject as possible.
I think you are basically right that the shoggoth meme depicts one of the most negative projections of what LLMs could be doing. One approach would be to come up with a single projection and try to convince everyone else to use that instead. I’m not comfortable with that either, because there’s a lot of uncertainty about what the most productive way to think about LLMs is, and I would like to keep options open.
Instead I’d rather have a collection of different ways to think about it (you could think of this collection as a discrete approximation to a probability distribution). Such a list would have many uses, e.g. as a checklist or a reference to point people to.
It does seem problematic for the rationalist community to refuse to acknowledge that the shoggoth meme presents LLMs as scary monsters, but it also seems problematic to insist that the shoggoth meme exaggerates the danger of LLMs. That claim should be judged on P(meme danger > actual danger), not on whether meme danger > E[actual danger]: if there’s significant uncertainty about how dangerous LLMs actually are, then there’s also significant uncertainty about whether the meme exaggerates the danger; one shouldn’t just compare against a single point estimate.
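A toy sketch of that last distinction, with made-up numbers purely for illustration (none of these danger levels come from the discussion itself): when the actual danger is uncertain, comparing the meme against the point estimate E[actual danger] can say "no exaggeration" even though the meme exaggerates in half of the plausible scenarios.

```python
# Hypothetical numbers for illustration only: suppose the meme depicts
# danger level 6, while the actual danger is equally likely to be
# 2 (mostly harmless) or 10 (genuinely monstrous).
meme_danger = 6
actual_danger_outcomes = [2, 10]  # equally likely scenarios

# Comparison against the single point estimate E[actual danger]:
expected_actual = sum(actual_danger_outcomes) / len(actual_danger_outcomes)
exaggerates_vs_expectation = meme_danger > expected_actual
# 6 > 6.0 is False, so by this test the meme "doesn't exaggerate".

# Comparison via P(meme danger > actual danger):
p_exaggerates = sum(
    meme_danger > d for d in actual_danger_outcomes
) / len(actual_danger_outcomes)
# = 0.5: the meme exaggerates in half the scenarios and understates in
# the other half -- a fact the point-estimate comparison hides entirely.
```

The point is not these particular numbers but the shape of the argument: "does the meme exaggerate?" is itself an uncertain proposition, so it deserves a probability rather than a verdict derived from one expected value.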