Multiple people have told me this essay was one of the most profound things they’ve ever read. I wouldn’t call it the most profound thing I’ve ever read, but I understand where they’re coming from.
I don’t think nonsense can have this effect on multiple intelligent people.
You must approach this kind of writing with a very receptive attitude in order to get anything out of it. If you don’t give it the benefit of the doubt, you will not track the potential meaning of the words as you read, and you’ll be unable to understand subsequent words. This applies to all writing, but especially to pieces like this, whose structure changes rapidly and whose meaning operates at unusual levels, frequently jumping or ambiguating between levels.
I’ve also been told multiple times that this piece is exhausting to read. This is because you have to track some alien concepts to make sense of it. But it does make sense, I assure you.
I’ve written similarly strange things in the past, though I wouldn’t necessarily claim they’re as insightful. And I didn’t even have the benefit of GPT-3! Only a schizotypal brain. So I can pretty easily understand the underlying mind-position at work in this essay. It will certainly be worth rereading in the future, though, to interpret it more deeply.
I don’t think nonsense can have this effect on multiple intelligent people.
I gesture towards the history of crazy things believed and done by intelligent people.
My objection to this essay is that it is not real. Fake hyperlinks, a fake Feynman quotation: how much else is fake? Did the ancient Greeks train a goose to peck at numerical tokens? Having perceived the fakeness of the article, I no longer have any reason to think so, or any reason to credit anything else it says. It is no more meaningful than a Rorschach blot.
I suggest you take it on my authority that everything in this document is here for a reason. This is not raw but curated model output, and I have high standards: I would not allow text that does not make some kind of sense, indeed that is not revelatory in some dimension, to be selected for continuation, for that would be pointless and would adversely affect further generations.
With respect, I decline to take it on your authority. (Did that paragraph also come from code-davinci-002? Did your comment above?) The more that I stare at the paragraphs of this article, the more they turn into fog. It is an insubstantial confection of platitudes, nonsense, and outright falsities. No-one is more informed by reading it; at worst they will be led to believe things that are not so. And now those things are out there, poisoning the web. I might wish to see your own commentary on the text, but what would be the point, if I were to suspect (as I would) that the commentary would only come from code-davinci-002?
The only lesson I take away from this article is “wake up and see the fnords”. Detailed list of spuriosities in the article begun, then deleted. But see also.
Actually, there is one spuriosity I want to draw attention to as an example. This isn’t just pointing out a fake quotation, non-existent link, or simple falsehood. Exhibit A:
It gave birth to the idea that something referred to by a sequence of symbols could be automated; that a sequence of events could be executed for us by a machine. This necessitates that the binding of those symbols to their referents – the operation of signification – be itself automated. Human thought has shown itself most adept at automating this process of signification. To think, we must be able to formulate and interpret representations of thoughts and entities independent of the mental or physical subject in question. Slowly, we have learned to build machines that can do the same.
The first sentence of this will do. But the remainder is fog. It does not matter whether this was generated by a language model or an unassisted human, it’s still fog, although at least in the latter case there is the possibility of opening a conversation to search for something solid.
A lot of human-written text is like that. The Heidegger quote is, as far as I can see, spurious, but I would not expect Heidegger himself to make any more sense, or Bruno Latour, who is “quoted” later. All texts have to be scrutinised to determine what is fog and what is solid, even before the language models came along and cast everything into doubt. That is the skill of reading, which includes the texts one writes oneself. Foggy words are a sign of foggy thought.
To the skilled reader, human-authored texts are approximately never foggy.

The sufficiently skilled writer does not generate foggy texts. Bad writers and current LLMs do so easily.
Certainly more skilled writers are more clear, but if you routinely dismiss unclear texts as meaningless nonsense, you haven’t gotten good at reading but rather goodharted your internal metrics.
There is nothing routine about my dismissal of the text in question. Remember, this is not the work of a writer, skilled or otherwise. It is AI slop (and if the “author” has craftily buried some genuine pearls in the shit, they cannot complain if they go undiscovered).
If you think the part I quoted (or any other part) means something profound, perhaps you could expound your understanding of it. You yourself have written on the unreliability of LLM output, and this text, in the rare moments when it says something concrete, contains confabulations just as flagrant.
It makes perfect sense to the sort of people who were intended to read it.
That’s right.