Your previous posts have been well-received, averaging >2 karma per vote. This post, "GPT-3: a disappointing paper", has received <1/2 karma per vote, due to a large quantity of downvotes. Your post is well-written, well-reasoned, and civil. I don't think "GPT-3: a disappointing paper" deserves downvotes.
I wouldn’t have noticed the downvotes if my own most heavily downvoted post hadn’t also cruxed around bottom-up few-shot learning.
I hope the feedback you received on this article doesn’t discourage you from continuing to write about the limits of today’s AI. Merely scaling up existing architectures won’t get us from AI to AGI. The world needs people on the lookout for the next paradigm shift.
Agreed! (Well, actually I do have like 10% credence that merely scaling up existing architectures will get us to AGI. But everything else I agree with.)
What existing architectures would you bet on, if you had to?
I don’t have a fleshed-out inside view on this; my 10% credence comes from outside-view reasons. If somehow my job were to build AGI now (mind you, I’m not an AI scientist), I’d try to combine GPT-3 with some sort of population-based reinforcement learning. Maybe the reward signal would come from chat interactions with human users (I’m assuming I work for Facebook or something, with access to millions of users willing to talk to my chatbot for free, plus huge amounts of data to get started). Idk, what would your answer be?
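To make the "population-based" part of that idea concrete, here is a minimal toy sketch of the exploit/explore loop used in population-based training. Everything in it is hypothetical: each "policy" is reduced to a single scalar parameter, and `chat_reward` is a noisy stand-in for the reward signal that would actually come from millions of human chat interactions.

```python
import random

random.seed(0)

def chat_reward(policy):
    """Hypothetical stand-in for reward derived from human chat feedback.
    Real feedback would be noisy, so we add Gaussian noise."""
    return policy["quality"] + random.gauss(0, 0.1)

def pbt_step(population):
    """One exploit/explore step of population-based training:
    the bottom half copies the top half's parameters (exploit),
    then perturbs them slightly (explore)."""
    scored = sorted(population, key=chat_reward, reverse=True)
    half = len(scored) // 2
    for loser, winner in zip(scored[half:], scored[:half]):
        loser["quality"] = winner["quality"]             # exploit: copy a better policy
        loser["quality"] += random.uniform(-0.05, 0.1)   # explore: perturb it
    return scored

# A population of 8 toy "chatbot policies", trained for 20 rounds.
population = [{"quality": random.random()} for _ in range(8)]
for _ in range(20):
    pbt_step(population)

best = max(p["quality"] for p in population)
```

In the real proposal, "quality" would be the full weight set of a GPT-3-scale model plus its training hyperparameters, and the exploit step would copy checkpoints between workers rather than a single number; the selection pressure from live user feedback is what the population structure is meant to harness.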
It’s hard to go much lower than 10% uncertainty on anything like this without specialized domain knowledge. I’m in a different position: I’m the CTO of an AI startup I founded, so I get a bit of an advantage from our private technologies.
If I had to restrict myself to public knowledge then I’d look for a good predictive processing algorithm and then plug it into the harmonic wave theory of neuroscience. Admittedly, this stretches the meaning of “existing architectures”.