That was fascinating. At first I couldn’t help but mentally try to attribute this to you.
In the coming decade, most of the Internet will be a cesspool. We already have trolls and algorithmic censorship. We’ll have trollbots. It’s easier to have a cesspool than it is to keep trolls out.
On the first read-through I was still nodding along, and I had this interesting reaction where I was preparing to hear you say at the end “surprise! I wrote it all!”
Under this frame, hearing “In the coming decade, most of the Internet will be a cesspool” caused me to momentarily feel a sort of respect for your willingness to be pessimistic and direct. Subjectively: maybe this is still your voice?
But then… I was trying to figure out what it would even mean for it to be harder to keep trolls out than to “have a cesspool”.
Like yes: pumping entropy out of a system is hard.
But is there a balancing test implied? A cost versus a benefit? An implicit claim that “such costs exist at all”, which could then be used as a reason to favor tolerating the cesspool?
Maybe it is a clever indirect insult to the operators of algorithmic censorship machines? The wording implied that “the cesspool” was going to be full of “trolls and censorship”, so their output counted as part of “the experience of the cesspool” based on the phrasing, and this could be read as a deniable way to be “in favor of the commentariat, not just blindly in favor of the OP” (which has been a huge real dividing line in “online” politics since maybe 2015?). And then, under the guise of such deniable wording, one can see how the constant intrusive deletion of “naive cess” (with false positives! and false negatives!) actually makes it harder to make careful sense of any of the pool of data, and causes one to stop using a system that censors this way, because the censorship is “just more robotic cess” that has to be mentally subtracted in ANOTHER step of nuanced interpretation?
But reading these tea leaves to figure out “Which side are you on? What’s your angle?”...
(Where this intent is what the text would “mean”...)
...caused it to click definitively for me: this text has no side. There is no angle.
This is just bot output that’s too dumb to have such things yet <3
The rest got progressively worse, until it was the sort of thing that is impressive the way a five-year-old’s art is impressive: as a sign of what a good little kid they are, and how much potential they are showing.
Maybe once the bots hit “age 8 equivalent” (like entering their age of reason) things will get more interesting?
But for a static piece of software (with no actual memory and no online conversational learning), the only way I can naively see to implement such output would be if it had a static model that could sort of “run forward” and mechanically “encompass” the totality of an 8-year-old’s knowledge and an 8-year-old’s repairable ignorance, and then carefully feign this ignorance, and feign the resolution of the ignorance, whenever such feigning is required to generate a highly plausible piece of text.
And then if all the ideas and material “needed to feign an eight-year-old’s goal-directed ignorance repair, and thus seem capable of reason” existed somewhere in the static model’s contents… well… that would be impressive, I think, but it would also imply an overhang of sorts?