That was a type of savant that I thought couldn’t happen before AGI.
This interests me (as someone professionally involved in the creation of savants, though not linguistic ones). Can you articulate why you thought that?
It wasn’t formalised thinking. I bought into the idea of AI-complete problems, i.e. that there were certain problems only a true AI could solve, and that an AI which could solve them could also solve all the others. I was also informally thinking of linguistic ability as the queen of all human skills (influenced by the Turing test itself and by the continuous failure of chatterbots). Finally, I wasn’t cognisant of the possibility that Big Data could solve these narrow problems by (clever) brute force. So I had the image of a true AI being defined by human-like performance on linguistic problems.