My impression is that Eliezer is sufficiently capable that he would be able to identify skilled programmers (I cannot guess about others at SIAI). More importantly, he is sufficiently well connected that he could hire someone widely recognised as an expert programmer specifically to consult on identifying and recruiting less well-known but more available talent to work at SIAI.
Someone like, say, Paul Graham (apparently a skilled programmer, and definitely skilled at identifying talent) could perhaps be persuaded to consult.
But the real point is that the task of getting skilled programmers pales in comparison to the task of finding people who can do AI theory. That is really hard. Implementing the results of the theory is child’s play by comparison.
At the risk of repeating myself, I’m not someone who can really judge this. I have my doubts, however, that AI theory is a conceptual problem that can be solved just by thinking about it. My guess is that you’ll have to do hard science to make progress on the subject: do research on the human brain, run large-scale experiments on supercomputers, build or invent specially optimized hardware, and do a lot of coding and debugging. That it just needs some smarts to come up with a few key insights, I don’t see that. How do people arrive at that conclusion?
The task of getting skilled programmers is probably part of the solution, because resources, intellectual and otherwise, might be instrumental in making progress on the problem.
That it just needs some smarts to come up with a few key insights
It is amazing how much difference words like ‘just’ and ‘a few’ can make! This is an extremely hard problem. All sorts of other skills are required, but those skills are commodities: they already exist, people have them, and you can buy them.
What is required to solve something like making AIs that remain stable when upgrading themselves is extremely intelligent individuals studying the best work in several related fields full-time for 15 years… and then having ‘a few insights’.
I think that XiXiDu’s point is that theory and implementation cannot be cleanly divorced. You may need to be constantly programming and trying out the ideas your theory spits out in order to guide and shape the theory into its final, correct form. We can’t necessarily just wait until the theory is developed and then buy the skill needed to implement it.
That it just needs some smarts to come up with a few key insights, I don’t see that. How do people arrive at that conclusion?
Well, nobody really knows, one way or the other.
So far, machine intelligence has mostly been 60 years of sluggish, gradual progress. On the other hand, we have a pretty neat theory of forecasting, which is the guts of the problem. Maybe we have been doing it all wrong, and there’s a silver bullet that nobody has stumbled across yet.