I think the conjoint probability that (1) Google decides to start working on AGI, (2) Google can put together a team that could develop an AGI, and (3) that team succeeds might be higher than the probability of (2) and (3) for SIAI/Eliezer.
(1) is pretty high because Google gets its pick of the most talented young programmers and gives them a remarkable amount of freedom to pursue their own interests. Especially if interest in AI increases, it wouldn’t be surprising if a lot of people with an interest in AGI ended up working there. I bet a fair number already do.
(2) and (3) are high because of Google’s resources, its brand and reputation, and the fact that it has shown it can complete and deploy innovative code and business ideas.
All of the above is said with very low confidence.
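The comparison above is just a product over the conjuncts: Google needs (1), (2), and (3) to all hold, while SIAI/Eliezer needs only (2) and (3). A minimal sketch, using entirely made-up probabilities (none of these numbers come from the comment; they are placeholders to show the arithmetic):

```python
# Illustrative only: all probabilities below are assumptions, not estimates.
p_google = {
    "starts_project": 0.30,  # (1) Google decides to work on AGI (assumed)
    "builds_team":    0.50,  # (2) assembles a team that could develop AGI (assumed)
    "team_succeeds":  0.10,  # (3) that team succeeds (assumed)
}
p_siai = {
    "builds_team":    0.20,  # (2) for SIAI/Eliezer (assumed)
    "team_succeeds":  0.05,  # (3) for SIAI/Eliezer (assumed)
}

def conjunction(probs):
    """Probability that all events occur, treating them as independent:
    the product of the individual probabilities."""
    result = 1.0
    for p in probs.values():
        result *= p
    return result

print(conjunction(p_google))  # 0.3 * 0.5 * 0.1 = 0.015
print(conjunction(p_siai))    # 0.2 * 0.05 = 0.01
```

Note the extra factor (1) drags Google’s total down, so the claim only holds if Google’s (2) and (3) are enough higher than SIAI’s to compensate.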
Of course, Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture, and we’d all be screwed.
Edit: I knew this would get downvoted :-)… or not.
I voted up. I think you may be mistaken, but you are looking at the relevant calculations.
Nice.
Fair point. I actually rate (1) quite low just because there are so few people who think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI compared to the manpower dedicated to other AI fields. Amongst Googlers there is presumably also only a very small fraction of folks potentially willing to tackle AGI head-on.