“In the context of the Dead Internet Theory, this sort of technology could easily lead to the entire internet becoming a giant hallucination. I don’t expect this one bit, if only because it genuinely is not in anyone’s interest to do so: internet corporations rely heavily on ad revenue, and a madly hallucinating internet is the death of this if there’s too much of an overhang. Surveillance groups would like to not be bogged down with millions or billions of ultra-realistic bots obscuring their targets. An aligned AGI would likely rather the humans under its jurisdiction be given the most truthful information possible. The only people who benefit from a madly hallucinating internet are sadists and totalitarians who wish only to confuse. And admittedly, there are plenty of sadists and totalitarians in control; their desire for power, however, currently conflicts strongly with the profit motive driven by the need to actually sell products. You can’t sell products to a chatbot.”
Does this paragraph imply that a theoretical media system in which an AGI produces the content would be manipulated by the system’s manager to expose viewers only to content that serves the owners’ interests? Or would an AGI capable of producing content on a system like this end up like Tay, implying that “Hitler could do better [than Bush]” because users taught it to say so? Or would it learn to create content that appeals to individual groups of people with similar ideals, much as cultural critics accuse the current, “non-dead” internet of doing?
This is one of my first times commenting on LessWrong.com, so I ask again that you excuse any ignorance I may have displayed.