I’ve seen it several times on Twitter, Reddit, and HN, and that’s excluding people like Jack Clark, who has pondered it repeatedly in his Import.ai newsletter & used it as a theme in some of his short stories (but much more playfully & thoughtfully in his case, so he’s not the target here). The one that probably annoyed me enough to write this was when Imagen hit HN and the second lengthy thread was all about ‘poisoning the well’, with most commenters accepting the premise. It has also been asked here on LW at least twice in different places. (I’ve also since linked this writeup at least 4 times to various people asking this exact question about generative models choking on their own exhaust, and the rise of ChatGPT has led to it coming up even more often.)