I used to work on a program that was designed to run binaries compiled for one processor on another. It was only meant to run the binaries compiled for a single minor revision of a GNU/Linux distro on one processor on the same minor revision of the same distro on another processor.
We had access to the source code of the distro—and got some changes made to make our job easier. We had access to the full chip design of one chip (to which, again, there were changes made for our benefit), and to the published spec of the other.
We managed to get the product out of the door, but every single code change—even, at times, changes to non-functional lines of code like comments—would cause major problems (mention the phrase “Java GUI” to me even now, a couple of years later, and I’ll start to twitch). We supported only a limited subset of functionality; it ran at a fraction of the speed, and even that took a hell of a lot of work to do at all.
Now, that was just making binaries compiled for a distro for which we had the sources to run on a different human-designed von Neumann-architecture chip.
Given my experience of doing even that, I’d say the amount of time it would take (even assuming continued progress in processor speeds and storage capacity, which is a huge assumption) to get human brain emulation to the point where an emulated brain can match a real one for reliability and speed is in the region of a couple of hundred years, yes.
Yes, emulation can be hard. But even so, writing software with the full power of the human brain from scratch seems much harder. If you agree, then you should still expect emulations to be the first AI to arrive.
I disagree. In general I think that once the principles involved are fully understood, writing from scratch a program that performs the same generic tasks as the human brain would be easier than emulating a specific human brain.
In fact I suspect that the code for an AI itself, if one is ever created, will be remarkably compact—possibly the kind of thing that could be knocked up in a few lines of Perl once someone has the correct insights into the remaining problems. AIXI, for example, would be a trivially short program to write, if one had the computing power necessary to make it workable (which is not going to happen, obviously).
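To give a rough sense of what I mean by “compact”, here is a toy sketch (in Python rather than Perl, but the point is the same) of the shape of an AIXI-style agent: a prior over environment models weighted by 2^-description-length, and brute-force planning against that mixture. To be clear, this is not real AIXI, which mixes over all computable environments and is incomputable; the two hand-picked models, their description lengths, the horizon and the helper names are all made up purely for illustration, and I have skipped the Bayesian update on the history for brevity.

```python
# Toy sketch of the *shape* of an AIXI-style agent. Not real AIXI: real AIXI
# mixes over all computable environments and is incomputable. Here the
# "hypothesis class" is two hand-picked environment models, weighted by
# 2^-description_length, and the agent does brute-force planning over all
# action sequences up to a fixed horizon. (The Bayesian posterior update on
# the observed history is skipped for brevity.)

from itertools import product

ACTIONS = [0, 1]
HORIZON = 3

# Illustrative environment models: each maps (history, action) -> (observation, reward).
def env_copy(history, action):          # rewards repeating the previous action
    last = history[-1][0] if history else 0
    return action, 1.0 if action == last else 0.0

def env_flip(history, action):          # rewards alternating actions
    last = history[-1][0] if history else 0
    return action, 1.0 if action != last else 0.0

MODELS = [(env_copy, 2), (env_flip, 3)]            # (model, description length in bits)
PRIOR = [2.0 ** -length for _, length in MODELS]   # stand-in for the Solomonoff prior

def expected_return(history, plan):
    """Return of an action sequence, averaged over the prior-weighted models."""
    total = 0.0
    for (model, _), weight in zip(MODELS, PRIOR):
        h, ret = list(history), 0.0
        for action in plan:
            obs, reward = model(h, action)
            h.append((action, obs))
            ret += reward
        total += weight * ret
    return total / sum(PRIOR)

def choose_action(history):
    """First action of the best plan over the horizon, found by brute force."""
    best_plan = max(product(ACTIONS, repeat=HORIZON),
                    key=lambda plan: expected_return(history, plan))
    return best_plan[0]

# History is a list of (action, observation) pairs.
print(choose_action([(0, 0), (1, 1)]))   # prints 1: repeating the last action wins under this prior
```

The formula is short; the hard part, as I said, is having the correct insights about what to put in place of the brute force.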
My view (and it is mostly a hunch) is that implementing generic intelligence will be a much, much easier task than implementing a copy of a specific intelligence that runs on different hardware, in much the same way that if you’re writing a computer racing game it’s much easier to create an implementation of a car that has only the properties needed for the game than it would be to emulate an entire existing car down to the level of the emissions coming out of the exhaust pipe and a model of the screwed up McDonald’s wrapper under the seat. The latter would be ‘easy’ in the sense of just copying what was there rather than creating something from basic principles, but I doubt it’s something that would be easier to do in practice.
Building emulators is hard. But I think it isn’t quite so hard as that, these days. Apple has now done it twice, and been able to run a really quite large subset of Mac software after each transition. Virtual machines are reasonably straightforward engineering at this point. Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine—and they’re quite robust these days with very small performance penalties. All these are certainly very large software engineering projects—but they’re routine engineering, not megaprojects, at this stage.
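To be concrete about what “basically emulators for an abstract virtual machine” means, here is a toy sketch of the fetch-decode-execute loop at the heart of any VM or emulator. The bytecode is made up for illustration and is nothing like the real JVM or CLR instruction sets, which are vastly larger, but the core loop has this shape.

```python
# Minimal sketch of the fetch-decode-execute loop at the heart of any VM or
# emulator, over a made-up stack-machine bytecode.

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        pc += 1
        if op == "PUSH":                 # push the literal that follows the opcode
            stack.append(bytecode[pc])
            pc += 1
        elif op == "ADD":                # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":                # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":              # print the top of the stack
            print(stack[-1])
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack

# (2 + 3) * 4 compiled to the toy bytecode
run(["PUSH", 2, "PUSH", 3, "ADD", "PUSH", 4, "MUL", "PRINT"])   # prints 20
```

Scaling that loop up to a full instruction set, and adding a JIT so the performance penalty stays small, is where the large-but-routine engineering effort goes.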
Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. Probably small changes in the physics model correspond to small changes in temperature, chemical content, etc. And an emulation that’s as good as a slightly feverish and drunk person would still be impressive and even useful.
“Apple has now done it twice”
No, they didn’t. At least one of those transitions actually used the software I described above, bought from the company I worked for. So I know exactly how hard it was to create.
“Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine”—which the engineers themselves get to specify, design, and implement; nobody gets to write the spec for a human brain.
“Further, I suspect the human brain is less sensitive than software to minor details of underlying platform.”
I would love to live in a world where re-implementing an algorithm that runs on meat so that it runs on silicon instead amounted to a ‘minor detail of underlying platform’. I live in this one, however.
I had assumed we were talking about low-level emulation: the program explicitly models each neuron, and probably at a lower level than that. Physical simulation is a well-understood problem, and my impression is that the chemists are pretty good at it.
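For a rough sense of what “explicitly models each neuron” could mean at the very coarsest level, here is a toy leaky integrate-and-fire neuron stepped with forward Euler. The parameter values are illustrative only, and a real emulation would presumably work at a much lower and more biophysically detailed level than this.

```python
# Toy illustration of "explicitly model each neuron": a single leaky
# integrate-and-fire neuron stepped with forward Euler. Parameter values are
# illustrative; a real brain emulation would need far more biophysical detail.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, r_m=10.0):
    """Membrane voltage trace and spike times for a constant input current.

    Units: time in ms, voltage in mV, current in nA, resistance in megaohms.
    """
    v = v_rest
    voltages, spikes = [], []
    steps = int(100.0 / dt)              # simulate 100 ms
    for step in range(steps):
        t = step * dt
        # Standard LIF dynamics: dv/dt = (-(v - v_rest) + R_m * I) / tau
        dv = (-(v - v_rest) + r_m * input_current) / tau
        v += dv * dt
        if v >= v_threshold:             # threshold crossed: record a spike and reset
            spikes.append(t)
            v = v_reset
        voltages.append(v)
    return voltages, spikes

_, spike_times = simulate_lif(input_current=2.0)
print(f"{len(spike_times)} spikes in 100 ms")
```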
Trying to do some clever white-box reimplementation of the algorithm is, I agree, probably intractable or worse. The emulation will be very far from the optimal implementation of the mind-program in question.