As for the “Wright brothers” situation, it’s not so obvious. We have AI methods which work but don’t scale well (theorem provers, semantic nets, expert systems; not a method, but nevertheless worth mentioning: SHRDLU), and we have well-scaling methods which lack generalization power (statistical methods, neural nets, SVMs, deep belief networks, etc.), yet we don’t know how to put it all together.
It looks like we are approaching the “Wright stage”, where one will have all the equipment needed to put together a working prototype.
we have well-scaling methods which lack generalization power (statistical methods, neural nets, SVMs, deep belief networks
You got it backwards. These methods have generalization power, especially the SVM (achieving generalization is the whole point of the VC theory on which it’s based), but don’t scale well.
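As background on the VC-theory remark (a textbook statement, not something quoted in this thread): a standard form of Vapnik’s bound says that, with probability at least 1 - \eta over a training sample of size n, every classifier f from a hypothesis class of VC dimension h satisfies

    R(f) \le R_{emp}(f) + \sqrt{\frac{h(\ln(2n/h)+1)+\ln(4/\eta)}{n}}

where R is the true risk and R_emp the empirical risk; low training error plus limited capacity h therefore guarantees performance on unseen data. The scaling complaint is a separate matter: exact kernel-SVM training manipulates the n-by-n kernel matrix, so time and memory grow roughly between O(n^2) and O(n^3) in the number of training examples, which is what hurts on large datasets.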
Yes, bad wording on my side. I mean something like the capability of representing and operating on complex objects, situations and relations. However, it doesn’t invalidate my (quite trivial) point that we don’t have a practical theory of AGI yet.
The race participants are the ones with things to conceal, mostly. One could try and incentivise them to reveal things by using something like the patent system—but since machine intelligence is likely to start out as a server-side technology, patents seem likely to be irrelevant—you can just use trade secrets instead, since those have better security, don’t need lawyers to enforce and have no expiration date. I discuss code-hiding issues here:
“Tim Tyler: Closed source intelligent machines”
http://www.youtube.com/watch?v=Fn8Ly9QJF6s
I figure that we are well past the “Wright brothers” stage—in the sense that huge corporations are already involved in exploiting machine intelligence technology—and large sums of money are already being made out of it.
machine intelligence is likely to start out as a server-side technology
I don’t understand. The difference between server-side and client-side is how you use it. It’s just going to be “really powerful technology” and from there it will be ‘server’, ‘client’, a combination of the two, a standalone system or something that does not reasonably fit that category (like Summer Glau).
Server side has enormous computer farms. Client side is mostly desktop and mobile devices—where there is vastly less power, storage and bandwidth available.
The server is like the queen bee—or with the analogy of multicellularity, the server is like the brain of the whole system.
The overwhelming majority of servers actually require less computing power than the average desktop. Many powerful computer farms don’t particularly fit the category of ‘server’; in particular, it isn’t useful to describe large data warehousing and data-mining systems using a ‘client-server’ model. That would just be a pointless distraction.
Client side is mostly desktop and mobile devices
I agree that the first machine intelligence is unlikely to be an iPhone app.
I have little sympathy for the idea that most powerful computer farms are not “servers”. It is not right: most powerful computer farms are servers. They run server-side software, and they serve things up to “clients”. See:
http://en.wikipedia.org/wiki/Server_farm
Right, but compare with the Google container data center tour.
I selected the word majority for a reason. I didn’t make a claim about the outliers and I don’t even make a claim about the ‘average power’.
I have little sympathy for the idea that most powerful computer farms are not “servers”. It is not right: most powerful computer farms are servers. They run server-side software, and they serve things up to “clients”.
That is a naive definition of ‘server’. “Something that you can access remotely and runs server software” is trivial enough that it adds nothing at all to our understanding of AIs to say it uses a server.
For comparison, just last week I had a task requiring use of one of the servers I rent from some unknown server farm over the internet. The specific task involved automation of a process and required client-side software (Firefox, among other things). The software I installed and used was all the software that makes up a client. It also performed all the roles of a client. On the list I mentioned earlier, that virtual machine is clearly “a combination of the two”, and that fact is in no way a paradox. “Client” and “server” are just roles that a machine can take on, and they are far from the most relevant descriptions of the machines that will run an early AI.
“Server” is a red herring.
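To make the “roles, not kinds of machine” point concrete, here is a minimal sketch (my illustration, not code from anyone in this thread; the upstream URL is just a placeholder) of a single Python process playing both roles at once: it answers HTTP requests as a server, and while handling each request it acts as a client to another service.

    # Minimal sketch of one process acting as both server and client.
    # UPSTREAM is a placeholder address, purely for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAM = "http://example.com/"

    class BothRolesHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Client role: fetch a page from another server.
            upstream_bytes = urlopen(UPSTREAM).read()
            # Server role: answer whoever called us.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Fetched %d bytes upstream\n" % len(upstream_bytes))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), BothRolesHandler).serve_forever()

Whether this box “is” a server or a client depends on which connection you look at, which is the sense in which the label by itself says little about the machines an early AI would run on.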
It’s the servers in huge server farms where machine intelligence will be developed.
They will get the required power about 5-10 years before desktops do, and have more direct access to lots of training data.
Small servers in small businesses may be numerous—but they are irrelevant to this point—there seems to be no point in discussing them further.
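The 5-10 year figure can be sanity-checked with some back-of-the-envelope arithmetic; the compute ratios and doubling times below are my illustrative assumptions, not numbers from the thread. If a farm has N times the compute of a desktop and desktop performance doubles every T years, a desktop catches up with today’s farm after roughly T * log2(N) years:

    # Back-of-the-envelope check of the "5-10 years before desktops" claim.
    # Compute ratios and doubling times are illustrative assumptions only.
    from math import log2

    def catch_up_years(farm_to_desktop_ratio, doubling_time_years):
        """Years until a desktop matches today's farm, assuming steady doubling."""
        return doubling_time_years * log2(farm_to_desktop_ratio)

    for ratio in (10, 100, 1000):
        for doubling in (1.5, 2.0):
            print("ratio %5dx, doubling every %.1f yr -> ~%.1f years"
                  % (ratio, doubling, catch_up_years(ratio, doubling)))

With a farm holding 10-100 times a desktop’s compute, this lands in roughly the 5-13 year range, consistent with the figure above; bigger farms push it further out.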
Arguing about the definition of http://en.wikipedia.org/wiki/Computer_server would seem to make little difference to the fact that most powerful computer farms are servers. Anyhow, if you don’t like using the term “server” in this context, feel free to substitute “large computer farm” instead—as follows:
“machine intelligence is likely to start out as a large computer farm technology”
Small servers in small businesses may be numerous—but they are irrelevant to this point—there seems to be no point in discussing them further.
If nothing else we seem to agree that neither small servers nor iPhones are the likely birthplace of AI. That definitely rules out servers that ARE iPhones!
“Large computer farm” and, for that matter, “large server farm” have a whole different meaning from “server-side technology”. I’m going here from using both client- and server-side technology for several automation tools that intrinsically need to take on both roles simultaneously, to seeing the term used to mean essentially ‘requires a whole bunch of computing hardware’. This jumps out to me as misleading.
I don’t think there is much doubt about the kind of hardware that the first machine intelligence will be run on. But I would be surprised if I arrived at that conclusion for the same reasons that you do. I think it is highly improbable that the critical theoretical breakthroughs will arrive in a form that makes a mere order of magnitude or two of difference in computing power the critical factor for success. But I do know from experience that when crafting AI algorithms the natural tendency is to expand to use all available computational resources.
Back in my postgrad days my professor got us a grant to develop some AI for factory scheduling using the VPAC supercomputer. I had a hell of a lot of fun implementing collaborative agent code. MPI-2 with C++ bindings, if I recall. But was it necessary? Not even remotely. I swear I could have written practically the same paper using an old 286 in half the run time. But while doing the research I used every clock cycle I could and champed at the bit wishing I had more.
If someone gets the theoretical progress to make a worthwhile machine intelligence I have no doubt that they will throw every piece of computer hardware at it that they can afford!
Computing power is fairly important:
“more computer power makes solving the AGI design problem easier. Firstly, more powerful computers allow us to search larger spaces of programs looking for good algorithms. Secondly, the algorithms we need to find can be less efficient, thus we are looking for an element in a larger subspace.”
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
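One crude way to quantify the first of those two effects (my own counting argument, not from the linked post): over an alphabet of k symbols there are about k^L programs of length L, so a compute budget C with per-program evaluation cost c lets a brute-force search cover programs up to length

    k^{L} \le C / c \quad\Rightarrow\quad L \le \log_k(C / c)

so the exhaustively searchable program length grows only logarithmically with compute. More hardware genuinely helps, as the quote says, but on this crude model an extra order of magnitude or two buys only a few more symbols of program length, which also fits the earlier point that such a difference in computing power is unlikely to be the deciding factor by itself.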
Those with a server farm have maybe 5-10 years hardware advantage over the rest of us—and they probably have other advantages as well: better funding, like-minded colleagues, etc.
Those with a server farm have maybe 5-10 years hardware advantage over the rest of us—and they probably have other advantages as well: better funding, like-minded colleagues, etc.
I somewhat agree with what you are saying here. Where we disagree slightly, in a matter of degree and not fundamental structure, is on the relative importance of the hardware vs. those other advantages. I suspect the funding, like-minded colleagues and particularly the “etc.” are more important factors than the hardware.
Thanks, it’s interesting, though I’m not very good at recognizing spoken English; I was unable to decipher the part about robots in particular.
Nevertheless, I doubt that the R&D division of a single corporation can do all the work necessary for launching AGI without open information from the scientific community. So they can hide the details of their implementation, but they cannot hide the ideas their work is based upon. Going back to the Wright brothers: in 1910 there was already an industry of internal combustion engines, Henry Ford was already making money, and aerodynamics had made some progress. All in all, I can’t see a crucial difference.
The Ford Airplane Company did get in on aeroplanes—but in the 1920s. In 1910 there was no aeroplane business.
For the inventors of machine intelligence, I figure you have to look back to people like Alan Turing. What we are seeing now is more like the ramping up of an existing industrial process. Creating very smart agents is better seen as being comparable to breaking the sound barrier.
And which information to conceal, right?