Thanks for suggesting “Speculations concerning the first ultraintelligent machine”. I knew about it only from the intelligence explosion quote and didn’t realize it said so much about probabilistic language modeling. It’s indeed ahead of its time and exactly the kind of thing I was looking for but couldn’t find w/r/t premonitions of AGI via SSL and/or neural language modeling.
I’m sure there’s a lot of relevant work throughout the ages (saw this tweet today: “any idea in machine learning must be invented three times, once in signal processing, once in physics and once in the soviet union”), it’s just that I’m unsure how to find it. Most people in the AI alignment space I’ve asked haven’t known of any prior work either. So I still think it’s true that “the space of large self-supervised models hasn’t received enough attention”. Whatever scattered prophetic works existed were not sufficiently integrated into the mainstream of AI or AI alignment discourse. The situation was that most of us were terribly unprepared for GPT. Maybe because of our “lack of scholarship”.
Of course, after GPT-3 everyone has been talking about large self-supervised models as a path to or foundation for AGI. My observation about the lack of foresight on SSL referred mainly to the pre-GPT era. And the ontological inertia of not talking about SSL before GPT means post-GPT discourse has been forced into clumsy frames.
I know about “On the Opportunities and Risks of Foundation Models”. It’s a good overview of SSL capabilities and “next steps”, but it’s still very present-focused and descriptive rather than speculative in the exploratory engineering vein, which I still feel is missing.
“Foundation models” has hundreds of references. Are there any in particular that you think are relevant?
Explanation for my strong downvote/disagreement: Sure, in an ideal world, this post would have much better scholarship.
In the actual world, there are tradeoffs between the number of posts and the quality of scholarship. The cost is both the time and the fact that doing a literature review is a chore. If you demand good scholarship, people will write slower and less. With some posts this is a good thing. With this post, I would rather have atrocious scholarship and a 1% higher chance of the sequence having one more post in it. (Hypothetical example; I expect the real tradeoffs are less favourable.)