I think this is a legitimate concern. It's probably not a significant issue right now, but it definitely would be one if SIAI started making dramatic progress towards AGI. I don't think it deserves the downvotes it's getting.
Note: the comment has been completely rewritten since the original wave of downvoting. It’s much better now.
I agree, this doesn’t deserve to be downvoted.
It should be possible for SIAI to build security measures while providing some transparency into the nature of that security, in a way that doesn't compromise it. I would bet that Eliezer has thought about this, or at least recognized that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI's plans.
I am more concerned about the possibility that random employees at Google will succeed in making an AGI than I am about SIAI constructing one. Suppose there were only 1000 employees at Google interested in AGI, each interested enough to work just 1 hour a month on it, and each only 80% as effective as Eliezer (being some of the smartest people in the world doesn't quite put them on the same level as Eli). Then if Eliezer will have AGI in, say, 2031, Google will have it in about 2017.
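The arithmetic behind that date is simple linear scaling of person-hours. Here is a minimal sketch of it; only the 1000-person, 1 hour/month, and 80% figures come from the comment, while the solo-hours and start-year values are my own assumptions:

```python
# Back-of-the-envelope reconstruction of the estimate above.
# Only the 1000-person, 1 hr/month, and 80% figures come from the comment;
# the solo-hours and start-year values are assumptions.
people = 1000
hours_each_per_month = 1
relative_effectiveness = 0.8

google_hours = people * hours_each_per_month * relative_effectiveness  # 800 effective hrs/month

solo_hours_per_month = 250   # assumption: a dedicated researcher at ~60 hrs/week
speedup = google_hours / solo_hours_per_month  # = 3.2x

start_year = 2010            # assumption: roughly when the estimate was made
solo_years = 2031 - start_year       # 21 years for the solo researcher
google_years = solo_years / speedup  # ~6.6 years
print(f"Google finishes around {start_year + google_years:.0f}")  # ~2017
```

The exact date moves around with the assumed solo workload; the point is just the naive linear multiplier, which the replies below dispute.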
Personally, I expect even moderately complicated problems—especially novel ones—to not scale or decompose at all cleanly.
So, leaving aside all questions about who is smarter than whom, I don’t expect a thousand smart people working an hour a month on a project to be nearly as productive as one smart person working eight hours a day.
If you could share your reasons for expecting otherwise, I might find them enlightening.
The idea is that they share their information and findings, so that while they are less efficient than people working on the problem constantly, they can point out possible solutions to each other that one person working alone would be less likely to notice except through a longer process. Since there would be between 4 and 5 people working on the project at any one time during the month (the sketch below checks this figure), I assume they would work as a group and stagger their times so that a nearly continuous effort is produced. Also, since much of the problem involves thinking things through, by not focusing on the issue constantly they may be more likely to come up with a solution than if they focused on it without a break.
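For what it's worth, the 4-5 figure is consistent with the earlier numbers. A minimal check, where the length of the working month is my assumption rather than something stated in the comment:

```python
# Rough check of the "4-5 people at any one time" claim (assumptions mine).
effective_hours_per_month = 1000 * 1 * 0.8  # 800, from the earlier estimate
working_hours_per_month = 170               # assumption: ~40 hr weeks
concurrent = effective_hours_per_month / working_hours_per_month
print(f"~{concurrent:.1f} people active at any given working hour")  # ~4.7
```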
This is a hypothetical; I have no idea how many people at Google are interested in AI or how much time they spend on it. I would imagine there are quite a few people at Google working on AGI, since it relates directly to Google's core business, and that they work on it significantly more than one hour a month each.
(Edit: the comment about intelligence and Eli was a pun.)
I don’t get it. I can haz Xplanation?
The word "Eli" can also be used for God (it is Hebrew for "my God"), hence the pun.
Oh :-)