One thing conspicuously missing from the post is a way of improving fidelity of simulation without changing the external training data, or the relationship between the model and that data, which I think falls out of self-supervised learning on summaries of dreams. There are many concepts of evaluation/summarization of text, so given a text it’s possible to form tuples (text, summary1, summary2, …) and do self-supervised learning on those, not just on the text itself (evaluations/summaries are themselves texts, not just one-dimensional metrics). For proofs, summaries could judge validity and relevance to some question or method; for games, the fact of winning and of following certain rules (which is essentially enough to win games, and also to play at a given level of skill, if skill level is in the summary). More generally, for informal text we could try to evaluate clarity of argument, correctness, honesty, being fictional, identities/descriptions of simulacra/objects in the dream, and so on, all of which GPT-3 already has enough structure to be asked for informally.
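To make the data format concrete, here is a minimal Python sketch of forming such tuples. The evaluators here are toy stubs standing in for a model (possibly the simulator itself) asked for each judgment informally; the `SUMMARY:` delimiter and everything else in it are illustrative assumptions, not a real pipeline:

```python
from typing import Callable, List

def augment_with_summaries(
    text: str,
    evaluators: List[Callable[[str], str]],
) -> str:
    """Serialize a (text, summary1, summary2, ...) tuple as one sequence,
    so ordinary next-token self-supervised training applies to it unchanged."""
    summaries = [evaluate(text) for evaluate in evaluators]
    return "\n".join([text] + [f"SUMMARY: {s}" for s in summaries])

# Toy evaluators: in practice each would be a model queried informally
# for a textual judgment; these stubs just make the sketch runnable.
def clarity(text: str) -> str:
    return "clear" if len(text.split()) < 200 else "rambling"

def is_fiction(text: str) -> str:
    return "fiction" if "once upon a time" in text.lower() else "nonfiction"

dreams = ["Once upon a time a simulacrum proved a lemma."]
dataset = [augment_with_summaries(d, [clarity, is_fiction]) for d in dreams]
print(dataset[0])  # train on these sequences with the usual LM loss
```

The point of serializing summaries as plain text appended to the dream is that no new loss or architecture is needed: the same self-supervised objective now also trains the model to anticipate how its dreams would be evaluated.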
Learning on such evaluated/summarized dreams should improve the ability to dream in a way that admits a given asked-for summary, ideally without changing the relationship between the model and the external training data. The improvement comes from gaining experience with dreams of a certain kind, from the model more closely anticipating the summaries of dreams of that kind, not from changing the way the simulator dreams in some systematic direction. But if the summaries report a dream’s level of optimality in some respect, then learning on dreams augmented with such summaries can be used for optimization, by conditioning on the summaries. (This post describes something along these lines.)
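A minimal sketch of that conditioning step, assuming a summary-first serialization was (also) used at training time so the dream is sampled conditional on the asked-for summary, in the spirit of return-conditioned decision transformers. `StubModel` and its `generate` method are hypothetical stand-ins for any autoregressive sampling call, not a real API:

```python
class StubModel:
    """Stand-in for an autoregressive LM; `generate` is hypothetical."""
    def generate(self, prompt: str) -> str:
        return prompt + " ... <sampled dream consistent with the summary>"

def dream_given_summary(model, asked_for_summary: str, prompt: str = "") -> str:
    # Prepend the asked-for summary, so generation is conditioned on it;
    # this only steers sampling if summary-before-text sequences were
    # part of the training data described above.
    return model.generate(f"SUMMARY: {asked_for_summary}\n{prompt}")

print(dream_given_summary(StubModel(), "a valid proof, relevant to the lemma"))
```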
And a sufficiently faithful simulacrum of a human being goes most of the way to AGI alignment.