Curated.
I found many things outstanding about this post. The key one, however, is that after reading it, I feel less confused when thinking about transformer language models. The post has that taste of deconfusion where many of the arguments are elegant and simple, like suddenly tilting a bewildering shape into place. I particularly enjoyed the discussion of the ways agency does and does not manifest within a simulator (multiple agents, irrational agents, non-agentic processes); the formulation of the prediction orthogonality thesis; the ways in which some prior alignment work (e.g. Bostrom's tool-oracle-genie-sovereign typology) does not carve at the joints of the abstraction most helpful for thinking about GPT; and how it all grounds out in arguments from the technical details of GPT (e.g. the absence of recursive prompting in the training set and its implications for the agency of the simulator).
I also want to curate this piece for its boldness. It strikes at finding a True Name in a domain of messy blobs of matrices, and uses the "simulator" abstraction to suggest a number of directions about which I found myself actively curious and cautiously optimistic. I very much look forward to further posts from janus and others who explore and play with the simulator abstraction in the context of large language models.
Thank you for this lovely comment. I’m pleasantly surprised that people were able to get so much out of it.
As I wrote in the post, I wasn't sure I'd ever get around to publishing the rest of the sequence, but the reception so far has prompted me to bump it up in priority.