Do I understand correctly that right now there are no simpler problems in AGI, no problems for regular people? Nothing where I could develop skills that would be useful in ways other than getting a PhD and doing heavy research alongside the world leaders in AGI?
What you’re looking for is work within a research program (in the philosophy of science sense, not the organizational sense). You get a research program when you’ve figured out the basic paradigms of your field and have established that following them yields meaningful progress. Then you can get work like “grow these microbial strains in petri dishes and document which of these chemicals kills them fastest” or “work on this approximation algorithm for a special case of graph search and see if you can shave a fraction off the exponent in the complexity class”.
The problem with AGI is that nobody yet has a solid idea of the basic principles on which to build an expansive research program like that. The work is basically about managing to be clever and knowledgeable enough, in all the right ways, that you have some chance of lucking into working on something that actually ends up making progress. It’s like trying to study electromagnetism in 1820 instead of today, or classical mechanics in 1420 instead of today.
Also, since there’s no consensus on what will work, the field is full of semi-crackpots like Selmer Bringsjord. So even if you do manage to get into academic research with someone credentialed and working on AGI, chances are you’ve found someone pursuing an obviously dead-ended avenue of research. You could end up with a PhD on the phenomenological analysis of modal logics for hypercomputation in Brooksian robotics, and be even more useless to anyone with an actual chance of developing AGI than you were before you started. I’m not even sure the problem is just “there are some cranks around in academic AGI research and you should avoid them”; it might be “academic AGI research is currently considered a dead-end field, so most of the people who end up there will be useless cranks.”
If there’s no entry level in AGI, the best thing to do is to figure out what the people who actually seem to be doing something promising in AI (AIXI, Watson, Google’s self-driving cars) did as their entry-level disciplines. My guess is lots and lots of theoretical computer science and math.
It is an open problem, so there’s no guarantee that whatever gets the most impressive results today will still get the most impressive results five years from now. We don’t even know which of the current research directions will turn out to be on the right track. But whoever is making progress in AGI in five years is probably going to be someone who can, and does, understand today’s state of the art.