You’ll probably want to either be the kind of good at math that regularly leaves “is good at math” people in the dust, or be prepared to work quite hard.
At least if you’re going the “AGI is all about math” route. If you take the “AGI is more about cognitive science and psychology” approach instead, you don’t necessarily need to be quite that good at math, though basic competence is still an absolute must.
Could you point me to somewhere I could find out which problems/directions you’re talking about? Since I’m not such a shining mathematician, maybe I could contribute in those areas, which I find similarly interesting.
Have there been any significant advances in AI or AGI theory so far made by people from a cognitive science or psychology background who didn’t also have very strong math or computer science skills? It’s a bit worrisome when Douglas Hofstadter comes to mind as the paradigmatic example of this approach, and he seems to have achieved nothing worth writing home about during a 30+-year career. Not to mention that he did have strong enough math skills to initially do a PhD in theoretical physics.
That’s hard to answer, given that there’s no general agreement on what would count as a significant advance in AGI theory. Something like LIDA feels like it could possibly be important and useful for AGI, but also maybe not. The Global Workspace Theory behind it does seem important, though. Various other neuroscience work, like the predictive coding hypothesis of the brain, also seems plausibly important.
So far I’d count AIXI and whatever went into building IBM Watson (incidentally, what did go into building it, is there a summary somewhere about what you’d want to study if you wanted to end up capable of working on something like that?) as reasonably significant steps. AIXI is pure compsci, and I haven’t heard anything about insights from cognitive science playing a big part in getting Watson working compared to plain old math and engineering effort.
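(For concreteness, and writing from memory rather than from Hutter’s exact statement: AIXI is roughly the expectimax agent

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_t + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $q$ ranges over programs modelling the environment, $\ell(q)$ is the length of $q$, the $o_i$ and $r_i$ are observations and rewards, and $m$ is the horizon. It’s algorithmic information theory plus sequential decision theory all the way down, with no cognitive science anywhere in sight.)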
I’d count the predictive coding model and probably also GWT as larger steps than AIXI. I’m not sure where I’d put Watson.
incidentally, what did go into building it, is there a summary somewhere about what you’d want to study if you wanted to end up capable of working on something like that?
Here is a paper about how Watson works in general, and here’s another about how it reads a clue. (Unsurprisingly, machine learning, natural language processing, and statistics skills seem relevant.)
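If it helps to make the “generate candidates, then score them with many pieces of evidence” shape of DeepQA concrete, here is a toy sketch in Python. Everything in it (the mini-corpus, the two features, the hand-picked weights) is invented for illustration; the real system uses hundreds of scorers and trained ranking models over huge corpora, and none of this comes from Watson’s actual code.

# A toy, DeepQA-flavoured sketch: generate candidate answers for a clue,
# score each candidate with a few crude "evidence" features, and rank the
# candidates by a weighted combination of those scores.

import math
import re

# Hypothetical mini-corpus: the passages a candidate generator might search.
CORPUS = [
    ("Neil Armstrong", "Neil Armstrong was the first person to walk on the Moon in 1969."),
    ("Buzz Aldrin", "Buzz Aldrin followed Armstrong onto the lunar surface minutes later."),
    ("Yuri Gagarin", "Yuri Gagarin was the first human in space, orbiting Earth in 1961."),
]

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def generate_candidates(clue):
    """Crude candidate generation: every title whose passage shares a word with the clue."""
    clue_tokens = tokenize(clue)
    return [(title, passage) for title, passage in CORPUS
            if clue_tokens & tokenize(passage)]

def evidence_scores(clue, passage):
    """Two toy evidence features: word overlap with the clue, and a brevity prior."""
    clue_tokens, passage_tokens = tokenize(clue), tokenize(passage)
    overlap = len(clue_tokens & passage_tokens) / max(len(clue_tokens), 1)
    brevity = 1.0 / math.log(len(passage_tokens) + 2)
    return {"overlap": overlap, "brevity": brevity}

# Hand-picked weights standing in for a trained answer-ranking model.
WEIGHTS = {"overlap": 3.0, "brevity": 1.0}

def answer(clue):
    """Rank candidates by the weighted sum of their evidence scores."""
    scored = []
    for title, passage in generate_candidates(clue):
        features = evidence_scores(clue, passage)
        score = sum(WEIGHTS[name] * value for name, value in features.items())
        scored.append((score, title))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    clue = "This astronaut was the first person to walk on the Moon."
    for score, title in answer(clue):
        print(f"{score:.2f}  {title}")

Running it on the sample clue ranks Neil Armstrong first; the point is just the pipeline shape (question analysis, candidate generation, evidence scoring, score merging), which is what the papers above describe in detail.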
Thank you for the answer.