But the bottleneck on organizational intelligence is either human intelligence or machine intelligence.
I disagree. I think there are lots of gains to intelligence that can happen at the point of human-computer interaction, or in the facilitation of human intelligence by machine intelligence, or vice versa.
For example, collaborative filtering technology. Or, internet message boards.
If we’re very lucky, those computers will directly inherit the corporation’s purported goal structure (“to enhance shareholder value”). Not that shareholder value is a good goal—just that it’s much less bad than a lot of the alternatives.
I’m curious why you think that an artificial intelligence system built by Google would be likely not to meet the corporation’s goal structure (or some sub-goal).
In practice, AI programming tends to be about building expert systems for particular functions. It’s difficult (and expensive) just to do that. So, building up an intelligent system that just goes crazy and kills people doesn’t seem to be in, say, Google’s interest.
That said, I’d be curious to follow the thread of whether maximizing shareholder value is a ‘friendly’ or ‘mean’ goal structure. Since that seems to be one of the predominant goal structures a superintelligence is likely to have, it seems like it would be of particular interest. (Another one might be “win elections”, since political parties are increasingly using machine intelligence to augment their performance.)
I disagree. I think there are lots of gains to intelligence that can happen at the point of human-computer interaction, or in the facilitation of human intelligence by machine intelligence, or vice versa.
For example, collaborative filtering technology. Or, internet message boards.
There are some gains, sure, but not lots and not, so far, recursive gains.
I’m curious why you think that an artificial intelligence system built by Google would be likely not to meet the corporation’s goal structure (or some sub-goal).
I think that many AI systems presently built by Google do meet the corporation’s sub-goals (or, to be more precise, sub-goals of parts of the organization, which might not be the same as the corporation as a whole). The only case I’m worried about is a self-modifying AI. Presently, there aren’t any of those. Ensuring that goals are stable under self-modification is the hard problem that SIAI is worried about.
In practice, AI programming tends to be about building expert systems for particular functions. It’s difficult (and expensive) just to do that. So, building up an intelligent system that just goes crazy and kills people doesn’t seem to be in, say, Google’s interest.
There’s been a lot of discussion around here on “Tool AI”; here’s one.
That said, I’d be curious to follow the thread of whether maximizing shareholder value is a ‘friendly’ or ‘mean’ goal structure. Since that seems to be one of the predominant goal structures a superintelligence is likely to have, it seems like it would be of particular interest. (Another one might be “win elections”, since political parties are increasingly using machine intelligence to augment their performance.)
On one hand, public corporations have certainly created plenty of prosperity over the past few hundred years, while (in theory) aiming mostly to maximize shareholder value.
But if value is denominated in dollar terms, one way to maximize shareholder value would be hyperinflation. That would be extremely bad for everyone. But even if we exclude that problem, most shareholders value something other than just dollars—the natural environment, for instance. And yet those preferences might not be captured by an AI’s goal system (especially a non-Google system; Google doesn’t seem to mind creating positive externalities but most other tech companies try to avoid it).
It still probably beats being turned into paperclips, but I would hope for better.
There are some gains, sure, but not lots and not, so far, recursive gains.
What about organizations that focus on tools that support software development? The Git community, for example.
Is there a resource you can direct me to that clarifies what you mean by recursive gains or self-modifying AI? If I’m not mistaken, these terms aren’t used in the resources I’ve been reading about this. But if I’m guessing the meaning of the terms correctly, it seems to me that organizations self-modify all the time.
There are some gains, sure, but not lots and not, so far, recursive gains.
What about organizations that focus on tools that support software development? The Git community, for example.
Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By “in the loop”, I mean humans are modifying Git, while Git is not modifying humans or itself.
Is there a resource you can direct me to that clarifies what you mean by recursive gains or self-modifying AI? If I’m not mistaken, these terms aren’t used in the resources I’ve been reading about this. But if I’m guessing the meaning of the terms correctly, it seems to me that organizations self-modify all the time.
Yes, but unfortunately it’s long-winded—specifically this article about something similar to the Git community.
Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By “in the loop”, I mean humans are modifying Git, while Git is not modifying humans or itself.
I think I see what you mean, but I disagree.
First, I think timtyler makes a great point.
Second, the level of abstraction I’m talking about is that of the total organization. So, does the organization modify its human components as it modifies its software component?
I’d say: yes. Suppose Git adds a new feature. Then the human components need to communicate with each other about that new feature and train themselves on it. Somebody in the community needs to self-modify to maintain mastery of that piece of the code base.
More generally, humans within organizations self-modify using communication and training.
At this very moment, by participating in the LessWrong organization focused around this bulletin board, I am participating in an organizational self-modification of LessWrong’s human components.
The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can’t directly change its hardware through recursive self-modification, I don’t see how that bottleneck puts AGI at an immediate, FOOMy advantage.
This seems to be quite similar to Robin Hanson’s Ubertool argument.
More generally, humans within organizations self-modify using communication and training.
The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can’t directly change its hardware through recursive self-modification, I don’t see how that bottleneck puts AGI at an immediate, FOOMy advantage.
The problems with wetware are not that it’s hard to change the hardware—it’s that there is very little that seems to be implemented in modifiable software. We can’t change the algorithm our eyes use to assemble images (this might be useful to avoid auto-correcting typos when proofreading). We can’t save the stack when an interrupt comes in. We can’t easily trade processing speed for more working memory.
We have limits on how much we can self-monitor. Consider writing PHP code which manually generates SQL statements. It would be nice if we could remember to always escape our inputs to avoid SQL injection attacks. A computer program could self-modify to do so; a human could try, but would inevitably forget on occasion (see WordPress’s history of security holes).
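For concreteness, here is a minimal sketch of that distinction, using Python’s sqlite3 rather than PHP purely as an illustration: the escaping rule can be enforced by the machinery on every call, instead of by human vigilance.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user_unsafe(name: str) -> None:
    # Relies on the programmer remembering to escape `name` by hand --
    # the step a human will eventually forget.
    conn.execute("INSERT INTO users (name) VALUES ('%s')" % name)

def add_user_safe(name: str) -> None:
    # Parameterized query: the driver does the quoting every time,
    # so there is nothing left for a human to remember.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

add_user_safe("Robert'); DROP TABLE users;--")   # stored as a plain string
print(conn.execute("SELECT name FROM users").fetchall())
```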
We can’t trivially copy our skills—if you need two humans who can understand a codebase, it takes approximately twice as long as it takes for one. If you want some help on a project, you end up spending a ton of time explaining the problem to the next person. You can’t just transfer your state over.
None of these things are “software”, in the sense of being modifiable. And they’re all things that would let self-improvement happen more quickly, and that a computer could change.
I should also mention that an AI with an FPGA could change its hardware. But I think this is a minor point; the flexibility of software is simply vastly higher than the flexibility of brains.
Most software companies plan to automate as much of their work as reasonably possible. So: it isn’t clear what you mean.
Are you saying that most software companies have code which modifies code (no, CPP, M4, and Spring don’t count), or code which modifies humans? Because that has not been my experience in the software industry.
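For concreteness, a minimal sketch of literal “code which modifies code”, in Python and purely as an illustration (not a claim about any particular company’s tooling): parse source, rewrite the syntax tree, emit new source.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Rewrite `<number> + <number>` into the computed constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold nested expressions first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.left.value, (int, float))
                and isinstance(node.right.value, (int, float))):
            return ast.copy_location(
                ast.Constant(node.left.value + node.right.value), node)
        return node

source = "total = 2 + 3 + 4"
tree = ConstantFolder().visit(ast.parse(source))
print(ast.unparse(tree))  # -> total = 9  (ast.unparse requires Python 3.9+)
```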
Most software companies plan to automate as much of their work as reasonably possible. So: it isn’t clear what you mean.
Are you saying that most software companies have code which modifies code [...]
Examples of automation in the software industry are refactoring, compilation and unit testing. The entire industry involves getting machines to do things—so humans don’t have to.
Automation is not the same as recursive self-modification. There’s no loop.
The context is Git improving Git—where “Git” refers to all the humans and machines involved in making Git.
So: there’s your loop, right there.
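To make the shape of that loop concrete, here is a toy sketch in Python (every name in it is made up for illustration): the “tool” is a piece of source code defining improve(), and each generation is produced by running the current tool on its own source, with no human step in between.

```python
# Toy illustration only: nothing here is smart. The point is just the shape
# of the loop -- the tool's output is the next version of the tool.
TOOL_SOURCE = '''
def improve(source):
    # A deliberately trivial "improvement": strip trailing whitespace.
    return "\\n".join(line.rstrip() for line in source.splitlines())
'''

def next_generation(tool_source: str) -> str:
    namespace = {}
    exec(tool_source, namespace)              # load the current tool
    return namespace["improve"](tool_source)  # apply it to its own source

gen1 = next_generation(TOOL_SOURCE)
gen2 = next_generation(gen1)                  # output feeds back in as input
print(gen2)
```

A compiler run or a test suite, by contrast, hands its output back to a human rather than back into the tool; whether “the humans and machines involved in making Git”, taken together, close the loop is exactly what is being argued above.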