machine intelligence is likely to start out as a server-side technology
I don’t understand. The difference between server-side and client-side is how you use it. It’s just going to be “really powerful technology” and from there it will be ‘server’, ‘client’, a combination of the two, a standalone system, or something that does not reasonably fit those categories (like Summer Glau).
Server side has enormous computer farms. Client side is mostly desktop and mobile devices—where there is vastly less power, storage and bandwidth available.
The server is like the queen bee—or with the analogy of multicellularity, the server is like the brain of the whole system.
The overwhelming majority of servers actually require less computing power than the average desktop. Many powerful computer farms don’t particularly fit the category of ‘server’; in particular, it isn’t useful to describe large data warehousing and data-mining systems using a ‘client-server’ model. That would just be a pointless distraction.
Client side is mostly desktop and mobile devices
I agree that the first machine intelligence is unlikely to be an iPhone app.
I have little sympathy for the idea that most powerful computer farms are not “servers”. It is not right: most powerful computer farms are servers. They run server-side software, and they serve things up to “clients”. See:
http://en.wikipedia.org/wiki/Server_farm
Right, but compare with the Google container data center tour.
I selected the word ‘majority’ for a reason. I didn’t make a claim about the outliers, and I didn’t make a claim about the ‘average power’ either.
I have little sympathy for the idea that most powerful computer farms are not “servers”. It is not right: most powerful computer farms are servers. They run server-side software, and they serve things up to “clients”.
That is a naive definition of ‘server’. “Something that you can access remotely and runs server software” is trivial enough that it adds nothing at all to our understanding of AIs to say it uses a server.
For comparison, just last week I had a task requiring use of one of the servers I rent from some unknown server farm over the internet. The specific task involved automation of a process and required client-side software (Firefox, among other things). The software I installed and used was all the software that makes up a client, and it performed all the roles of a client. On the list I mentioned earlier, that virtual machine is clearly “a combination of the two”, and that fact is in no way a paradox. “Client” and “server” are just roles that a machine can take on, and they are far from the most relevant descriptions of the machines that will run an early AI.
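To make the “just roles” point concrete, here is a minimal sketch (hypothetical Python of my own, not anything from the thread) of a single process taking on both roles at once: it serves HTTP on a local port while simultaneously acting as a client that fetches from it.

```python
# Minimal sketch: one process acting as server and client at the same time.
# Hypothetical illustration only; the port and response text are arbitrary.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server role: answer an incoming request.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"served and fetched by the same machine\n")


def serve(port=8080):
    HTTPServer(("localhost", port), EchoHandler).serve_forever()


if __name__ == "__main__":
    # Take on the server role in a background thread...
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.5)  # crude wait for the listener to come up

    # ...while taking on the client role from the very same process.
    with urllib.request.urlopen("http://localhost:8080/") as response:
        print(response.read().decode())
```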
“Server” is a red herring.
It’s the servers in huge server farms where machine intelligence will be developed.
They will get the required power about 5-10 years before desktops do, and have more direct access to lots of training data.
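For a rough sense of what a 5-10 year hardware lead amounts to in raw compute, here is a back-of-the-envelope sketch (assuming, purely for illustration, compute doubling every 1.5-2 years; those doubling times are my assumption, not a figure from this thread).

```python
# Rough sketch: compute-gap factor implied by a 5-10 year hardware lead,
# under an assumed doubling time. The doubling times are assumptions.
def compute_gap(lead_years, doubling_years):
    """Factor by which available compute grows over lead_years."""
    return 2 ** (lead_years / doubling_years)

for doubling in (1.5, 2.0):       # assumed doubling time (years)
    for lead in (5, 10):          # the 5-10 year lead mentioned above
        print(f"doubling every {doubling}y, {lead}y lead: "
              f"~{compute_gap(lead, doubling):.0f}x")
```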
Small servers in small businesses may be numerous—but they are irrelevant to this point—there seems to be no point in discussing them further.
Arguing about the definition of a server (http://en.wikipedia.org/wiki/Computer_server) would seem to make little difference to the fact that most powerful computer farms are servers. Anyhow, if you don’t like using the term “server” in this context, feel free to substitute “large computer farm” instead, as follows:
“machine intelligence is likely to start out as a large computer farm technology”
Small servers in small businesses may be numerous—but they are irrelevant to this point—there seems to be no point in discussing them further.
If nothing else, we seem to agree that neither small servers nor iPhones are the likely birthplace of AI. That definitely rules out servers that ARE iPhones!
“Large computer farm” and, for that matter, “large server farm” have a whole different meaning from “server-side technology”. I’m going here from using both client- and server-side technology at once, for several automation tools that intrinsically need to take on both of those roles simultaneously, to seeing the term used to mean essentially ‘requires a whole bunch of computing hardware’. This jumps out at me as misleading.
I don’t think there is much doubt about the kind of hardware that the first machine intelligence will be run on. But I would be surprised if I arrived at that conclusion for the same reasons you do. I think it is highly improbable that the critical theoretical breakthroughs will arrive in a form that makes a mere order of magnitude or two of difference in computing power the critical factor for success. But I do know from experience that when crafting AI algorithms the natural tendency is to expand to use all available computational resources.
Back in my postgrad days my professor got us a grant to develop some AI for factory scheduling using the VPAC supercomputer. I had a hell of a lot of fun implementing collaborative agent code (MPI-2 with C++ bindings, if I recall correctly). But was it necessary? Not even remotely. I swear I could have written practically the same paper using an old 286 and half the run time. But while doing the research I used every clock cycle I could get and champed at the bit wishing I had more.
If someone makes the theoretical progress needed for a worthwhile machine intelligence, I have no doubt that they will throw every piece of computer hardware they can afford at it!
Computing power is fairly important:
“more computer power makes solving the AGI design problem easier. Firstly, more powerful computers allow us to search larger spaces of programs looking for good algorithms. Secondly, the algorithms we need to find can be less efficient, thus we are looking for an element in a larger subspace.”
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
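As a toy illustration of the first point in that quote (a hypothetical sketch of my own, not taken from the linked post): a larger compute budget lets a brute-force search cover a bigger space of candidate programs before giving up, and lets slower candidates stay in the running.

```python
# Toy sketch of "more compute lets us search a larger space of programs".
# Hypothetical illustration only; the primitives and target are arbitrary.
import itertools

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names to an input value."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def search(target_fn, max_length, test_inputs=range(5)):
    """Brute-force search over every program up to max_length.

    There are |PRIMITIVES| ** length candidates at each length, so a
    bigger compute budget (a bigger max_length) buys an exponentially
    larger space to look in.
    """
    for length in range(1, max_length + 1):
        for program in itertools.product(PRIMITIVES, repeat=length):
            if all(run(program, x) == target_fn(x) for x in test_inputs):
                return program
    return None

if __name__ == "__main__":
    target = lambda x: ((x + 1) ** 2) * 2   # the behaviour we want to find
    print(search(target, max_length=3))     # -> ('inc', 'sq', 'dbl')
```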
Those with a server farm have maybe 5-10 years hardware advantage over the rest of us—and they probably have other advantages as well: better funding, like-minded colleagues, etc.
Those with a server farm have maybe 5-10 years hardware advantage over the rest of us—and they probably have other advantages as well: better funding, like-minded colleagues, etc.
I somewhat agree with what you are saying here. Where we disagree slightly, in a matter of degree and not of fundamental structure, is on the relative importance of the hardware versus those other advantages. I suspect the funding, the like-minded colleagues, and particularly the ‘etc.’ are more important factors than the hardware.