They can’t use one improvement to fuel another; they would have to come up with the next one independently.
I disagree.
Suppose an organization has developers who work in-house on its issue tracking system (several organizations do this, mostly software companies).
An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).
Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.
I think that in this case, an organization’s self-improvement to the issue tracker source code recursively ‘fuels’ other improvements to the organization’s cognition.
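To make the recursion concrete, here is a minimal sketch (all names are hypothetical, not any real tracker’s API): the tracker’s own source repository is just one of the repositories it tracks, so closing an issue against the tracker improves the very tool the organization uses to coordinate every later fix, including later fixes to the tracker itself.

```python
# Hypothetical minimal sketch: an issue tracker whose own source repository
# is among the repositories it tracks. Fixing the tracker improves the tool
# used to coordinate every subsequent fix, the tracker's own fixes included.
from dataclasses import dataclass, field


@dataclass
class Repo:
    name: str
    issues: list = field(default_factory=list)


@dataclass
class IssueTracker:
    repos: dict = field(default_factory=dict)

    def track(self, repo: Repo) -> None:
        self.repos[repo.name] = repo

    def file_issue(self, repo_name: str, title: str) -> None:
        self.repos[repo_name].issues.append(title)

    def close_issue(self, repo_name: str, title: str) -> None:
        # "Closing" an issue against the tracker's own repo stands in for
        # an improvement to the organization's own coordination machinery.
        self.repos[repo_name].issues.remove(title)


tracker = IssueTracker()
tracker.track(Repo("issue-tracker"))   # the tracker's own source code
tracker.track(Repo("product"))         # the organization's product

# The organization uses the tracker to coordinate work on the tracker...
tracker.file_issue("issue-tracker", "search is slow on large projects")
tracker.close_issue("issue-tracker", "search is slow on large projects")

# ...which makes subsequent coordination (on every repo, itself included)
# a little better: one improvement fueling the next.
tracker.file_issue("product", "crash on startup")
```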
The point isn’t that an AGI has or does not have certain skills. It’s that it has the ability to learn those skills. Deep Blue doesn’t have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.
Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?
Then they hire a skilled flute-player, right?
I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.
My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
I agree that organizations may be seen as similar to an AGI that has supra-human intelligence in many ways, but not in their ability to self-modify.
Yes, an organization’s self-improvement can fuel further improvement, but not to the degree that a foom-ing AGI’s would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw