Now that the goalposts have moved from “these neural nets will never work, and that’s why they’re bad” to “they are working, and that’s why they’re bad”, a repeated criticism of DALL·E 2 etc. is that their deployment will ‘pollute the Internet’ by democratizing high-quality media, which may (given all the advantages of machine intelligence) quickly come to exceed ‘regular’ (artisanally-crafted?) media, and that, ironically, this will make it difficult or impossible to train better models. I don’t find this plausible at all, but lots of people seem to, and no one is correcting all these wrong people on the Internet.
Can I get a link to someone who actually believes this? I’m honestly a little skeptical this is a common opinion, but wouldn’t put it past people I guess.
I’ve seen it several times on Twitter, Reddit, and HN, and that’s excluding people like Jack Clark, who has pondered it repeatedly in his Import.ai newsletter & used it as a theme in some of his short stories (much more playfully & thoughtfully in his case, so he’s not the target here). I think the one that annoyed me enough to write this was when Imagen hit HN and the second lengthy thread was all about ‘poisoning the well’, with most commenters accepting the premise. It has also been asked here on LW at least twice in different places. (I’ve also since linked this writeup at least 4 times to various people asking this exact question about generative models choking on their own exhaust, and the rise of ChatGPT has led to it coming up even more often.)