You might want to try to build yourself a T-shaped skillset in the relevant disciplines, based for example on MIRI’s course recommendation list: master one part of the domain well enough that you could eventually do a PhD in it, and have enough awareness of the rest to be reasonably conversant about it.
My impression is that if you want to be serious about this stuff, most of it is quite heavy going compared to general STEM undergraduate fare. You’ll probably want to be either the sort of good at math that regularly leaves the merely “good at math” people in the dust, or be prepared to work quite hard.
At least if you’re going by the “AGI is all about math” route. If you take the “AGI is more about cognitive science and psychology” approach, you don’t necessarily need to be quite that good at math, though basic competence is still an absolute must.
Could you point me to somewhere I could find the problems/directions you’re talking about? Since I’m not such a shining mathematician, maybe I could contribute in those areas, which I find similarly interesting.
Have there been any significant advances in AI or AGI theory so far made by people from cognitive science or psychology background who didn’t also have very strong math or computer science skills? It’s a bit worrisome when Douglas Hofstadter comes to mind as the paradigmatic example of this approach, and he seems to have achieved nothing worth writing home about during a 30+-year career. Not to mention that he did have strong enough math skills to initially do a PhD on theoretical physics.
That’s hard to answer, given that there’s no general agreement on what would count as a significant advance in AGI theory. Something like LIDA feels like it could possibly be important and useful for AGI, but also maybe not. The Global Workspace Theory behind it does seem important, though. Various other neuroscience work, like the predictive coding hypothesis of the brain, also seems plausibly important.
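To give a flavor of the predictive coding idea, here’s a deliberately toy sketch: the system maintains a latent estimate, generates a prediction from it through a fixed generative model, and repeatedly nudges the estimate to shrink the prediction error. This is a one-dimensional caricature I’m making up for illustration, not a faithful rendering of any published model.

```python
# Toy predictive-coding loop (heavily simplified, one latent variable):
# the system predicts the input as w * x, measures the prediction error,
# and descends the error gradient to update its latent estimate x.
def infer_latent(y, w=2.0, x=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        error = y - w * x      # prediction error: observation minus prediction
        x += lr * w * error    # gradient step on squared error w.r.t. x
    return x

# With generative weight w=2 and observation y=4, x converges to ~2.
print(round(infer_latent(4.0), 3))  # prints 2.0
```

The point of the caricature: perception is cast as inference, with the “signal” flowing up being the error rather than the raw input.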
So far I’d count AIXI and whatever went into building IBM Watson (incidentally, what did go into building it, is there a summary somewhere about what you’d want to study if you wanted to end up capable of working on something like that?) as reasonably significant steps. AIXI is pure compsci, and I haven’t heard anything about insights from cognitive science playing a big part in getting Watson working compared to plain old math and engineering effort.
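For context, AIXI’s action selection (roughly, in Hutter’s formulation) is an expectimax over all futures, with each environment hypothesis weighted by the algorithmic probability of the programs q that reproduce the history on a universal Turing machine U:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \left( r_k + \cdots + r_m \right)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the a, o, r are actions, observations, and rewards, and ℓ(q) is the length of program q; the incomputable inner sum is exactly why AIXI is a theoretical ideal rather than an algorithm you can run.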
I’d count the predictive coding model and probably also GWT as larger steps than AIXI. I’m not sure where I’d put Watson.
incidentally, what did go into building it, is there a summary somewhere about what you’d want to study if you wanted to end up capable of working on something like that?
Here is a paper about how Watson works in general, and here’s another about how it reads a clue. (Unsurprisingly, machine learning, natural language processing, and statistics skills seem relevant.)
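As I understand the DeepQA papers, a core move is statistical evidence combination: many independent scorers rate each candidate answer, and a learned model merges those scores into one confidence. Here’s a hypothetical miniature of that idea; the feature names, weights, and candidates are all invented for illustration, not taken from Watson.

```python
import math

# Invented weights standing in for a trained model that merges evidence
# scores (type match, passage support, popularity) into one confidence.
WEIGHTS = {"type_match": 2.0, "passage_support": 1.5, "popularity": 0.5}

def confidence(features, bias=-2.0):
    """Logistic combination of per-candidate evidence scores."""
    z = bias + sum(WEIGHTS[name] * score for name, score in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # squash into [0, 1]

# Two made-up candidate answers with made-up evidence scores.
candidates = {
    "Toronto": {"type_match": 0.1, "passage_support": 0.4, "popularity": 0.9},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "popularity": 0.7},
}
best = max(candidates, key=lambda name: confidence(candidates[name]))
print(best)  # prints Chicago
```

The real system combines hundreds of scorers and trains the merger on past question data, but the shape of the problem, i.e. ranking candidates by merged statistical evidence, is the part that calls for the machine learning and statistics background mentioned above.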
I don’t know how I could miss MIRI’s course recommendation list. It looks great. Will definitely take a closer look at it.
The second part is a bit of a disappointment for me, since I’m not that kind of student. I’m in the stronger group of mathematicians at my university, but within that group I’m average or below (they are among the best in my country).
Maybe I put too much weight on the maths part of AGI, which obviously isn’t for me. And I’m not sure about doing a PhD in it right now either. Do I understand correctly that right now there are no less complicated problems in AGI, problems for regular people? Nothing where I could develop skills that would be useful in ways other than doing a PhD and heavy research with the AGI world leaders?
Do I understand correctly that right now there are no less complicated problems in AGI, problems for regular people? Nothing where I could develop skills that would be useful in ways other than doing a PhD and heavy research with the AGI world leaders?
What you’re looking for is work within a research program (in the philosophy of science sense, not in the organizational sense). You get a research program when you’ve figured out the basic paradigms of your field and have established that you can get meaningful progress by following those. Then you can get work like “grow these microbial strains in petri dishes and document which of these chemicals kills them fastest” or “work on this approximation algorithm for a special case of graph search and see if you can shave off a fraction from the exponent in the complexity class”.
The problem with AGI is that nobody really has an idea on the basic principles to build an expansive research program like that on yet. The work is basically about just managing to be clever and knowledgeable enough in all the right ways so that you have some chance to luck out into actually working on something that ends up making progress. It’s like trying to study electromagnetism in 1820 instead of today, or classical mechanics in 1420 instead of today.
Also, since there’s no consensus on what will work, the field is full of semi-crackpots like Selmer Bringsjord. So even if you do manage to get to do academic research with someone credentialed who works on AGI, chances are you’ve found someone running an obviously dead-ended avenue of research, and you’ll end up with a PhD on the phenomenological analysis of modal logics for hypercomputation in Brooksian robotics, even more useless to anyone with an actual chance of developing an AGI than you were before you got started. I’m not even sure the problem is just “there are some cranks around in academic AGI research and you should avoid them”; it might be “academic AGI research is currently considered a dead-end field, and so most of the people who end up there will be useless cranks.”
If there’s no entry level in AGI, the best thing to do is to try to figure out what the people who actually seem to be doing something promising in AI (AIXI, Watson, Google’s self-driving cars) were doing as their entry-level disciplines. My guess is lots and lots of theoretical computer science and math.
It is an open problem, so there’s no guarantee that what gets the most impressive results today will get the most impressive results 5 years from now. And we don’t even know which of the current research directions will end up actually being on the right track. But whoever is making progress in AGI in 5 years is probably going to be someone who can and does understand what’s going on in today’s state-of-the-art.
Alternative AGI course recommendation lists: one by Pei Wang, another by Ben Goertzel.