Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don’t exist, and so I look for proof of relevant talent and learning rate.
I’m not sure the olympiads are such a uniquely optimal selector. For sure there were lots of superstars at the IOI, but doing a PhD now makes me realise that many of those small-scale problem-solving skills don’t necessarily transfer to broader-scale AI research (putting together a body of work, seeing analogies between different theories, predicting which research direction will be most fruitful). Equally, I met a ton of superstars working at Google, and I mean deeply brilliant superstars, not just well-trained professional coders. Google is trying to attract much the same crowd as SIAI, but they have a ton more resources, so insofar as it’s possible it makes sense to try to recruit people from Google.
It would be nice if we could get both groups (international olympiads and Google) reading relevant articles, and thinking about rationality and existential risk. Any thoughts here, alexflint or others?
Well, for the olympiads: each country runs a training camp leading up to the actual olympiad, and they’d probably be more than happy to have someone from SIAI give a guest lecture. These kids would easily pick up the whole problem from a half-hour talk.
Google also has guest speakers, and someone from SIAI could certainly go along and give a talk. It’s a much more difficult nut to crack, though: Google has a somewhat insular culture, and they’re constantly dealing with overblown hype, so many may tune out as soon as something that sounds too “futuristic” comes up.
What do you think?
Yes, those seem worth doing.
Re: the national olympiad training camps, my guess is that it is easier to talk if an alumnus of the program recommends us. We know alumni of the US math olympiad camp, and the US computing olympiad camp, but to my knowledge we don’t know alumni from any of the other countries or from other subjects. Do you have connections there, Alex? Anyone else?
What about reaching out to people who scored very highly when taking the SATs as 7th graders? Duke sells the names and info of the test-takers to those who can provide “a unique educational opportunity.”
http://www.tip.duke.edu/talent_searches/faqs/grade_7.html#release
Sure, but only in Australia I’m afraid :). If there’s anyone from SIAI in that part of the world then I’m happy to put them in contact.
Thinking about this point is leading me to conclude that Google is substantially more likely than SIAI to develop a General AI before anyone else. Gintelligence anyone?
Well, I don’t think Google is working on AGI explicitly (though I wouldn’t know), and I think they’re not working on it for much the same reason that most research labs aren’t: it’s difficult, risky research, outside the research mainstream, and most people don’t put much thought into the implications.
I think the product of the probabilities that (1) Google decides to start working on it, (2) Google can put together a team that could develop an AGI, and (3) that team succeeds might be higher than the product of (2) and (3) for SIAI/Eliezer.
(1) is pretty high because Google gets its pick of the most talented young programmers and gives them a remarkable amount of freedom to pursue their own interests. Especially if interest in AI increases, it wouldn’t be surprising if a lot of people with an interest in AGI ended up working there. I bet a fair number already do.
(2) and (3) are high because of Google’s resources, their brand/reputation, and the fact that they’ve shown they’re capable of completing and deploying innovative code and business ideas.
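The comparison above is just multiplying conjunctive probabilities. A minimal sketch, with entirely made-up placeholder numbers (not estimates from the discussion) and assuming the events are independent:

```python
from math import prod

# All numbers below are made-up placeholders for illustration, not real estimates.
p_google = [
    0.3,  # (1) Google decides to start working on AGI
    0.6,  # (2) Google assembles a team that could develop an AGI
    0.2,  # (3) that team succeeds
]
p_siai = [
    0.3,  # (2) SIAI assembles a capable team
    0.1,  # (3) that team succeeds
]

# The chance the whole conjunction comes true is the product of its parts.
print(round(prod(p_google), 3))  # 0.036
print(round(prod(p_siai), 3))    # 0.03
```

The point of the argument is that even though Google carries an extra conjunct (1), its overall product can still come out higher if its (2) and (3) are sufficiently larger than SIAI’s.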
All of the above is said with very low confidence.
Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we’d all be screwed.
Edit: I knew this would get downvoted :-)… or not.
I voted up. I think you may be mistaken but you are looking at relevant calculations.
Nice.
Fair point. I actually rate (1) quite low, just because so few people think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI compared to the manpower dedicated to other AI fields. Among Googlers there is presumably also only a very small fraction of folks willing to tackle AGI head-on.