The reason an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don’t have this same power, in that they can’t modify the mental structure of the people who make up the organization. They can change the people in it, and the structure connecting them, but that’s not the same type of optimization power as an AGI would have.
Also:
When judging whether an entity has intelligence, we should consider only the skills relevant to the entity’s goals.
Not if you’re talking about general intelligence. Deep Blue isn’t an AGI, because it can only play chess. Chess is its only goal, but we still do not call it an AGI, because it is not able to take its algorithm and apply it to new fields.
Deep Blue is far, far from being an AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementing combat strategy within a domain of imaginable possibilities is a far easier problem than AGI.
In combat, speed just might be the most important advantage of all: speed of getting a projectile or an attacking column to its destination, and speed of sizing up a situation so that strategies can be determined. And speed is the most trivial thing in AI.
In general, it is far easier to destroy than to create.
So I wouldn’t dismiss an A-(not-so)G-I as a threat because it is poor at music composition, or true deep empathy(!), or even something potentially useful like biology or chemistry; i.e. it could be quite specialized, achieving a tiny fraction of the totality of AGI and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.
The argument in the post is not that AGI isn’t more powerful than organizations; it is that organizations are also very powerful, and probably sufficiently powerful that they will create huge issues before AGI does.
Yes. I was pointing out that the thing that makes AGI dangerous, i.e. recursive self-improvement, does not apply to organizations.
You are claiming that organisations don’t improve? Or that they don’t improve themselves? Or that improving themselves doesn’t count as a form of recursion? None of these positions seems terribly defensible to me.
Organizations don’t have this same power, in that they can’t modify the mental structure of the people who make up the organization. They can change the people in it, and the structure connecting them, but that’s not the same type of optimization power as an AGI would have.
I may be missing something, but...if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can’t the organization modify its own source code?
Of course, you run into some hardware and wetware constraints, but so does pure software.
Not if you’re talking about general intelligence. Deep Blue isn’t an AGI, because it can only play chess. Chess is its only goal, but we still do not call it an AGI, because it is not able to take its algorithm and apply it to new fields.
Fair enough. But then consider the following argument:
Suppose I have a general, self-modifying intelligence.
Suppose that the world is such that it is costly to develop and maintain new skills.
The intelligence has some goals.
If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.
At this point, the general intelligence would modify itself into a non-general intelligence.
By this logic, if an AGI had goals that weren’t so broad that they required the entire spectrum of possible skills, then it would immediately castrate itself of its generality.
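A minimal sketch of the pruning step this argument describes, assuming an agent that charges itself an upkeep cost for every skill it keeps; the skill names, payoffs, and costs below are invented purely for illustration and are not taken from the discussion:

```python
# Toy model of the argument above: an agent pays an upkeep cost for every skill
# it keeps, and drops any skill that does not pay for itself.
# Skill names, payoffs, and costs are invented purely for illustration.

skills = {
    "play_chess":     {"upkeep": 1.0, "payoff_toward_goals": 5.0},
    "play_flute":     {"upkeep": 1.0, "payoff_toward_goals": 0.0},
    "compose_music":  {"upkeep": 2.0, "payoff_toward_goals": 0.0},
    "write_software": {"upkeep": 1.5, "payoff_toward_goals": 4.0},
}

def prune_irrelevant_skills(skills):
    """Keep only skills whose contribution to the goals exceeds their upkeep."""
    return {name: s for name, s in skills.items()
            if s["payoff_toward_goals"] > s["upkeep"]}

print(sorted(prune_irrelevant_skills(skills)))  # ['play_chess', 'write_software']
```

Under these assumptions the agent keeps only the goal-relevant skills, which is the self-narrowing the argument predicts.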
if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can’t the organization modify its own source code?
Such an organisation can self-modify, but those modifications aren’t recursive. They can’t use one improvement to fuel another; they would have to come up with the next one independently (or, if they could, it wouldn’t be nearly to the extent that an AGI could; if you want me to go into more detail on this, let me know).
If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.
The point isn’t that an AGI has or does not have certain skills. It’s that it has the ability to learn those skills. Deep Blue doesn’t have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.
They can’t use one improvement to fuel another; they would have to come up with the next one independently
I disagree.
Suppose an organization has developers who work in-house on their issue tracking system (there are several that do—mostly software companies).
An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).
Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.
I think that in this case, an organization’s self-improvement to the issue tracker source code recursively ‘fuels’ other improvements to the organization’s cognition.
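One way to picture the ‘fuels’ claim is a toy feedback loop in which each improvement to the tracker makes the next improvement slightly cheaper; the budget, costs, and 5% reduction below are invented for illustration, not measurements of any real organization:

```python
# Toy feedback loop: each improvement to the issue tracker makes the next
# improvement a little cheaper, so a fixed effort budget buys more improvements
# than it would without the feedback. All numbers are invented for illustration.

effort_budget = 100.0                  # total developer-hours available
cost_of_next_improvement = 10.0        # hours needed for the first improvement
cost_reduction_per_improvement = 0.95  # each improvement cuts future costs by 5%

improvements = 0
while effort_budget >= cost_of_next_improvement:
    effort_budget -= cost_of_next_improvement
    improvements += 1
    # The improved tracker makes the next round of work slightly cheaper.
    cost_of_next_improvement *= cost_reduction_per_improvement

print(improvements)  # 13 here, versus 10 with no feedback at all
```

How strong such a loop can get in a real organization is exactly what the rest of the thread argues about.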
The point isn’t that an AGI has or does not have certain skills. It’s that it has the ability to learn those skills. Deep Blue doesn’t have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.
Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?
Then they hire a skilled flute-player, right?
I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.
My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
I think that in this case, an organization’s self-improvement to the issue tracker source code recursively ‘fuels’ other improvements to the organization’s cognition.
I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.
My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
I agree that organizations may be seen as similar to an AGI that has supra-human intelligence in many ways, but not in their ability to self-modify.
The reason an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don’t have this same power, in that they can’t modify the mental structure of the people who make up the organization. They can change the people in it, and the structure connecting them, but that’s not the same type of optimization power as an AGI would have.
They can also replace the humans with machines, one human/task at a time. The process is called “automation”.
By this logic, if an AGI had goals that weren’t so broad that they required the entire spectrum of possible skills, then it would immediately castrate itself of its generality.
Does that mean it would no longer be a problem?
I think that in this case, an organization’s self-improvement to the issue tracker source code recursively ‘fuels’ other improvements to the organization’s cognition.
Yes, it can fuel improvement. But not to the same level that an AGI that is foom-ing would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw
Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance. The developers of Eclipse use Eclipse itself to program Eclipse. Improvements to it help them make further improvements directly.
So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.
It’s like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/
It’s highly unlikely a company will be able to get >1.
To me, that just sounds like confusion about the relationship between genetic and psychological evolution.
Um, > 1 what? It’s easy to make irrefutable predictions when what you say is vague and meaningless.
The point of the article is that if the recursion can feed back on itself strongly enough, then each new insight allows for more insights, as with uranium in a nuclear bomb. The ‘> 1’ refers to the average amount of further improvement that a foom-ing AGI gains from each insight.
What I was trying to say is that this factor is much less than 1 for corporations, which makes them different from an AGI. (To see the effect, try plugging 0.9^x into a calculator, then 1.1^x.)
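To make the calculator comparison concrete, here is a minimal sketch that prints a few values of 0.9^x and 1.1^x (the two factors named in the parent comment); nothing else in it is taken from the discussion:

```python
# Compare a sub-critical factor (0.9) with a super-critical one (1.1), the two
# values named in the parent comment. Each step multiplies by the factor, the
# way each insight is supposed to enable (on average) some amount of further insight.
for x in (1, 5, 10, 25, 50):
    print(f"x={x:>2}  0.9^x = {0.9**x:8.4f}   1.1^x = {1.1**x:10.2f}")
```

A per-insight factor below 1 means the cascade of improvements dies out; a factor above 1 means it compounds.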
So: that sounds like what is commonly called “exponential growth”.
Some companies do exhibit exponential economic growth. Indeed the whole economy exhibits exponential growth—a few percent a year—as is well known. I don’t think you have thought your alleged corporate “shrinking” effect through.