I tend to think that the broadest issues, such as those of meta-ethics, may be discussed by a wider professional community, though of course most meta-ethicists will have little to contribute (divine command theorists, for example). It may still be the case that a small team is best for solving more specific technical problems in programming the AI’s utility function and proving that it will not reprogram its own terminal values. But I don’t know what SIAI’s position is.
It may still be the case that a small team is best for solving more specific technical problems in programming the AI’s utility function and proving that it will not reprogram its own terminal values.
My intuition leads me to disagree with the suggestion that a small team might be better. The only conceivable (to me) advantage of keeping the team small would be to minimize the pedagogical effort of educating a large team on the subtleties of the problem and the technicalities of the chosen jargon. But my experience has been that investment in clarity of pedagogy yields dividends in your own understanding of the problem, even if you never get any work or useful ideas out of the yahoos you have trained. And, of course, you probably will get some useful ideas from those people. There are plenty of smart folks out there. Whole generations of them.
My intuition leads me to disagree with the suggestion that a small team might be better.
People argue that small is better for security reasons. But one could just as well argue that large is better for security, because there is more supervision and competition. Would you rather trust five people (likely friends) or a hundred strangers working for fame and money? After all, we’re talking about a project that will result in the implementation of a superhuman AI that determines the future of the universe. A handful of people might do anything, regardless of what they are signaling, but a hundred people are much harder to control. So the security argument runs both ways. The real question is what will maximize the chance of success, and here I agree that it will take many more people than are currently working on the various problems.
I agree. But, with Luke, I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI. Only the second of those two questions requires security—there is no reason not to pursue Friendliness theory openly.
I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI.
That is probably not true. There may well be some differences, though. For instance, it is hard to see how the corner cases in decision theory that are so often discussed around here have much relevance to the problem of actually constructing a machine intelligence—UNLESS you want to prove things about how its goal system behaves under iterative self-modification.
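To make that last point a little more concrete, here is a minimal toy sketch of the kind of invariant one would want to prove about a goal system under self-modification. Everything in it (the Agent class, terminal_utility, accept_successor) is hypothetical illustration, nothing like real FAI work: a successor program is accepted only if its valuations agree with the current terminal values on the cases we can check.

```python
# Toy illustration only: a "goal-stable" self-modification check.
# All names here are hypothetical; an actual result would be a proof
# about the agent's reasoning, not a runtime equality test like this.

def terminal_utility(outcome: str) -> float:
    """Toy terminal values: prefer outcomes described as 'paperclip-free'."""
    return 1.0 if "paperclip-free" in outcome else 0.0

class Agent:
    def __init__(self, utility):
        self.utility = utility

    def accept_successor(self, candidate_utility, test_outcomes) -> bool:
        # Accept a rewrite only if the candidate agrees with the current
        # terminal values on every outcome we are able to test.
        return all(
            self.utility(o) == candidate_utility(o) for o in test_outcomes
        )

agent = Agent(terminal_utility)

# A successor that quietly inverts the agent's values should be rejected.
drifted = lambda o: 1.0 - terminal_utility(o)
outcomes = ["paperclip-free future", "paperclip maximised"]
print(agent.accept_successor(terminal_utility, outcomes))  # True
print(agent.accept_successor(drifted, outcomes))           # False
```

The obvious limitation of the sketch is that it only checks a finite list of test outcomes; proving that the goal system stays fixed across every self-modification is exactly the harder problem the comment above is pointing at.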
People argue that small is better for security reasons. But one could just as well argue that large is better for security, because there is more supervision and competition.
The “smaller is better” idea seems linked to “security through obscurity”—a common term of ridicule in computer security circles.
The NSA manage to get away with some security through obscurity—but they are hardly a very small team.