My intuition leads me to disagree with the suggestion that a small team might be better.
People argue that small is better for security reasons. But one could just as well argue that large is better for security, because a larger team means more supervision and competition. Would you rather trust 5 people (likely friends) or a hundred strangers working for fame and money? After all, we’re talking about a project that will result in the implementation of a superhuman AI that will determine the future of the universe. A handful of people might do anything, regardless of what they are signaling, but a hundred people are much harder to control. So the security argument runs both ways. The real question is what will maximize the chance of success, and here I agree that it will take many more people than are currently working on the various problems.
I agree. But, with Luke, I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI. Only the second of those two questions requires security—there is no reason not to pursue Friendliness theory openly.
> I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI.
That is probably not true; the two problems are largely intertwined. There may well be some points of divergence, though. For instance, it is hard to see how the corner cases in decision theory that are discussed so much around here have much relevance to the problem of actually constructing a machine intelligence, unless you want to prove things about how its goal system behaves under iterative self-modification.
> People argue that small is better for security reasons. But one could just as well argue that large is better for security, because a larger team means more supervision and competition.
The “smaller is better” idea seems linked to “security through obscurity”—a common term of ridicule in computer security circles.
The NSA manage to get away with some security through obscurity, but they are hardly a very small team.