I cannot think of any route to recursive self-improvement for an organization that does not go through an AI. A priori, it’s conceivable that there is such a route and I just haven’t thought of it, but on the other hand, the corporate singularity hasn’t happened, which suggests that it is extremely difficult to make happen with the resources available to corporations today.
I find this confusing, since in my understanding and experience, many organizations undergo recursive self-improvement lots of the time.
Could you elaborate your thinking on this? Why is an organization’s intervention into, say, the organizational structure of its own management not effectively recursively self-improving on applied organization theory?
One could argue that the expansion of global capitalism constitutes a ‘corporate singularity’.
Sorry, my comment was misphrased. Organizations recursively self-improve all the time, but there is an upper bound on how much organizations have been able to improve so far, and that upper bound falls well short of anything catastrophic. I should have said “self-improvement to a level that exceeds its starting point by an extremely large margin”, not “recursive self-improvement”.
Ok, thanks for explaining that.
I think we agree that organizations recursively self-improve.
The remaining question is whether organizational cognitive enhancement is bounded significantly below that of an AI.
So far, most of the arguments I’ve encountered for why the bound on machine intelligence is much higher than human intelligence have to do with the physical differences between hardware and wetware.
I don’t disagree with those arguments. What I’ve been trying to argue is that the cognitive processes of an organization are based on both hardware and wetware substrates. So organizational cognition can take advantage of the physical properties of computers, and is not bounded by wetware limits.
I guess I’d add here that wetware has some nice computational properties as well. It’s possible that the ideal cognitive structure would efficiently use both hardware and wetware.
Ah, so you’re concerned that an organization could solve the friendly AI problem, and then make it friendly to itself rather than humanity? That’s conceivable, but there are a few reasons I’m not too concerned about it.
Organizations are made mostly out of humans, and most of their agency goes through human agency, so there’s a limit to how far an organization can pursue goals that are incompatible with the goals of the people comprising it. So at the very least, an organization could not intentionally produce an AGI that is unfriendly to the members of the team that produced it. It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but a future utopia made perfect by AGI is about as “far” a concept as you can get, so most people will be idealistic about it.
Is Google “made mostly out of humans”? What about its huge datacenters? They are where a lot of the real work gets done—right?
So, I’m not sure I have this straight, but you seem to be saying that one of the reasons you are not concerned about this is that many people use a daft reasoning technique when dealing with future utopias, and that makes you idealistic about it?
If so, that’s cool, but why should rational thinkers share your lack of concern?
Google’s datacenters don’t have much agency. Their humans do.
No, it makes them idealistic about it.
There will always be some finite upper bound on the extent to which existing agents will have been able to improve so far.
Google has managed to improve quite a bit since the chimpanzee-like era, and it hasn’t stopped yet. Evidently the “upper bound” is a long, long way above the starting point—and not very “catastrophic”.
True. My point was that if it were easy for an organization to become much more powerful than it is now, and the organization were motivated to do so, then it would already be much more powerful than it is now. So we should not expect a sudden increase in organizations’ self-improvement abilities unless we can identify a good reason why such an increase is particularly likely. The increased ease of self-modification offered by being completely digital is such a reason, but since organizations are not completely digital, this does not offer a way for organizations to suddenly increase their rate of self-improvement unless we can upload an organization.
“We don’t expect a sudden increase in organizations’ self-improvement abilities.”

We don’t expect a sudden increase in the self-improvement abilities of machines either. The bottom line is that evolution happens gradually. Going digital isn’t a reason to expect a sudden increase in self-improvement abilities. We know that because the digital revolution has been going on for decades now, and the resulting rate of improvement is clearly gradual. It is gradual because digitization affects one system at a time, and there are many systems involved, each of which is instantiated many times—and their replacement takes time. So, for example, the human memory system has already been superseded in practically every way by machine memories, and the human retina has already been superseded in practically every way by digital cameras. Humans won’t suddenly be replaced by machines. They will coevolve for an extended period—indeed, they have already been doing that for thousands of years now.
Maybe you don’t expect that, but surely you must be aware that many of us do.
Anyway, nothing seems particularly close to powerful enough to be catastrophically dangerous at the moment except for nuclear-armed nations, which have been fairly stable in their power. With the exception of North Korea, which isn’t powerful enough, the nuclear powers are not much of a threat because they would prefer not to cause massive destruction. Every organization that’s not a country is far enough away from that level of power that I don’t expect it to become catastrophically dangerous any time soon without a sudden increase in self-improvement.
I am aware that there’s an argument that at some point things will be changing rapidly:

“I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—‘AI go FOOM’.”
We are witness to Moore’s law. A straightforward extrapolation of that says that at some point things will be changing rapidly. I don’t have an argument with that. What I would object to are saltations. Those are suggested by the term “suddenly”—but they are contrary to evolutionary theory.
Probably, things will be progressing fastest well after the human era is over. It’s a remote era which we can really only speculate about. We have far more immediate issues to worry about than what is likely to happen then.
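As a rough illustration of what a “straightforward extrapolation” of Moore’s law amounts to, here is a minimal sketch, assuming a two-year doubling time and a starting point of about 10^9 transistors per chip; both figures are assumptions chosen only for illustration.

# Minimal sketch of a Moore's-law-style exponential extrapolation.
# Assumed parameters (illustrative only): two-year doubling time,
# starting point of roughly 1e9 transistors per chip.

def extrapolate(start: float, years: float, doubling_time: float = 2.0) -> float:
    """Compound exponential growth forward by `years`."""
    return start * 2.0 ** (years / doubling_time)

for horizon in (10, 20, 40):
    print(f"+{horizon} years: ~{extrapolate(1e9, horizon):.1e} transistors per chip")

The only point the sketch makes is the one in the comment above: smooth exponential growth eventually produces very large absolute changes, without any saltation along the way.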
So: giant oaks from tiny acorns grow—and it is easiest to influence creatures when they are young.