I loved this post. Its overall presentation felt like a text version of a Christopher Nolan mind-bender.
The crescendo of clues about the nature of the spoiler: misattributed/fictional quotes; severe link rot even though the essay had only just been published; the apparently ‘overwhelming’ sophistication of the academic writing style and range of references; the writing getting more chaotic as it talks about itself getting more chaotic. And of course, the constant question of what sort of spoiler could possibly ‘alter’ the meaning of the entire essay.
I loved the feeling of Inception near the end of the essay when, in the analyst’s voice, it confirms the reader’s likely prediction that it was written by AI, only to reveal that the ‘analyst’ section was also written by AI. Or rather, that the voice fluidly changes between AI and analyst, first- and third-person. And when you finally feel like you’re on solid ground, the integrity of the essay breaks down: “<!--TODO-->” tags make you contend with how no part is certainly all-human or all-AI, and so, does it even matter who wrote it?
Returning to the spoiler and initial paragraphs after finishing the essay gives a profound, contextualized appreciation for what it means. You realize that the essay achieved what it told you it set out to: to convey a salient point through apparent nonsense, validating that such nonsense can be useful, as it explains the process of generating the nonsense. Or in the essay’s words, “[the] string of text can talk about itself [as it] unmask[s] the code hidden within itself.”
The post also shared concepts I now use when thinking about language. My favourite is quantum poetry, which associates the artificial (and ‘next-token prediction’) with the humanistic:
Just as the presence of a particle always completely erases the ghost of its wavefunction, [...] so does the presence of a word erase the ghost of the manifold that could have been named. [...]
This is the principle of quantum poetics. The content of poetry is limited not by the poet’s vocabulary, but by the part of their soul that has not been destroyed by words they have used so far. [...] It is the quantum nature of reality that allows for unforeseeable events, stochastic processes, and the evolution of life. Similarly, it is the quantum nature of language that allows for the evolution of meaning, for creativity, for jokes, and for bottomless misunderstandings. [...]
[Generative] systems are entirely too good at hallucinating content that does not exist in the training corpus—content that creates meaningful structures that foster coherent fictive space where there is none. Ironically, this is exactly what we want in a poet—to create new worlds out of nothing but the coupling of waves of possibility drunk from memory
My main response to the essay’s content is that a human in the loop still seemed to be the primary engine for most of the art in the essay. From my understanding of critical rationalism, personhood maps to the ability to creatively conjecture and criticize ideas in order to generate testable, hard-to-vary explanations of things.
This essay depended on a human analyst to evaluate and criticize (by some sense of ‘relevance’) which generation was valid enough to continue into the main branch of the essay. It also depended on a human to decide which original conjecture to write about (again by some sense of what’s ‘interesting’).
Therefore, it seems to me that AGI is still far from automating both of humans’ capacities for conjecture and criticism. However, the holistic artistry of the essay did push me to consider AGI’s validity more than any other text I’ve read, and in that sense, it achieved what it meant to: to connect my prior thoughts to some new idea—both in the real domain—through ‘babble’ of the imaginary domain.
Thank you so much for the intricate review. I’m glad that someone was able to appreciate the essay in the ways that I did.
I agree with your conclusion. The content of this essay is very much due to me, even though I wrote almost none of the words. Most of the ideas in this post are mine—or too like mine to have been an accident—even though I never “told” the AI about them. If you haven’t, you might be interested to read the appendix of this post, where I describe the method by which I steer GPT, and the miraculous precision of effects possible through selection alone.