GPT-4 (Edited because I actually realize I put way more than 5% weight on the original phrasing): SOTA on language translation for every language (not just English/French and whatever else GPT-3 has), without fine-tuning.
Not GPT-4 specifically, assuming they keep the focus on next-token prediction of all human text, but “around the time of GPT-4”: Superhuman theorem proving. I expect one of the millennium problems to be solved by an AI sometime in the next 5 years.
AI solving a millennium problem within a decade would be truly shocking, IMO. That’s the kind of thing I wouldn’t expect to see before AGI is the world superpower. My best guess, coming from a mathematics background, is that dominating humanity is an easier problem for an AI.
That’s what people used to say about chess and Go. Yes, mathematics requires intuition, but so does chess; the game tree is too big to be explored fully.
Mathematics requires greater intuition and has a much broader and deeper “game” tree, but once we figure out the analogue to self-play, I think it will quickly surpass human mathematicians.
Sure. I’m not saying it won’t happen, just that an AI will already be transformative before it does happen.
I agree that before that point, an AI will be transformative, but not to the point of “AGI is the world superpower”.