I don’t have a fleshed-out inside view on this; my credence is 10% for outside-view reasons. If somehow my job were to build AGI now (mind you, I’m not an AI scientist), I’d try to combine GPT-3 with some sort of population-based reinforcement learning. Maybe the reward signal would come from chat interactions with human users (I’m assuming I work for Facebook or something and have access to millions of users willing to talk to my chatbot for free, plus huge amounts of data to get started); a toy sketch of the population-based part is below. Idk, what would your answer be?
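For concreteness, here’s a minimal sketch of the kind of population-based training loop I have in mind. Everything in it is a stand-in: the “policy” is just a parameter vector rather than a GPT-3-scale model, and `simulated_chat_reward()` is a hypothetical placeholder for aggregated user feedback, not any real API.

```python
# Toy population-based training (PBT) loop.
# The "policy" is a bare parameter vector and the reward is faked;
# in the imagined setup both would come from a large language model
# plus real chat feedback.
import random
import numpy as np

POP_SIZE = 8
PARAM_DIM = 16
GENERATIONS = 50

def simulated_chat_reward(params: np.ndarray) -> float:
    """Stand-in for aggregated human feedback on a chatbot's replies."""
    target = np.linspace(-1.0, 1.0, PARAM_DIM)  # pretend "good conversation" optimum
    return -float(np.sum((params - target) ** 2))

# Each member carries its own parameters and its own learning rate,
# a hyperparameter that PBT mutates over time.
population = [
    {"params": np.random.randn(PARAM_DIM), "lr": 10 ** random.uniform(-3, -1)}
    for _ in range(POP_SIZE)
]

for gen in range(GENERATIONS):
    # 1. Train step: crude gradient-free update (keep a perturbation if it scores better).
    for member in population:
        candidate = member["params"] + member["lr"] * np.random.randn(PARAM_DIM)
        if simulated_chat_reward(candidate) > simulated_chat_reward(member["params"]):
            member["params"] = candidate

    # 2. Evaluate and rank the population by reward.
    population.sort(key=lambda m: simulated_chat_reward(m["params"]), reverse=True)

    # 3. Exploit/explore: the bottom half copies the top half's weights
    #    and perturbs its hyperparameters.
    half = POP_SIZE // 2
    for loser, winner in zip(population[half:], population[:half]):
        loser["params"] = winner["params"].copy()
        loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])

best = population[0]
print("best reward:", simulated_chat_reward(best["params"]))
```

The point of the population is mostly step 3: hyperparameters get tuned on the fly by copying winners and perturbing them, which is the part I’d hope buys you something over plain fine-tuning of a single model.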
It’s hard to get much below 10% uncertainty on anything like this without specialized domain knowledge. I’m in a different position: I’m CTO of an AI startup I founded, so I get a little bit of an advantage from our private technologies.
If I had to restrict myself to public knowledge then I’d look for a good predictive processing algorithm and then plug it into the harmonic wave theory of neuroscience. Admittedly, this stretches the meaning of “existing architectures”.