I’ve never been able to figure out what sort of work ems would do once everything available has been turned into computronium. A few of them would do maintenance on the physical substrate, but all I can imagine for the rest is finding ways to steal computational resources from each other.
What are humans doing now that we need only ~2% of the workforce to grow food and ~15% to design and make stuff?
Most of those other people are doing useful tasks, without which people wouldn’t get nearly as much of what they want. If you don’t understand our current economy, you don’t have much of a prayer of understanding future ones.
I didn’t say the rest weren’t doing useful tasks. On the contrary, I meant to imply that if only a fraction of the workforce works on providing subsistence directly and obviously, it doesn’t mean that the rest are useless rent-seekers.
(That said, I probably do have a more pessimistic view than you about the amount of rent-seeking and makework that takes place presently.)
Probably a fair point. One of the things we do is keep each other entertained.
Ems would still need “sensory” stimulation, though part of having a work ethic is not needing a lot of sensory stimulation.
Quite. I also don’t think emulation is going to come anything like as quickly as most people here seem to think. I’ll start to think that maybe emulation might happen in the next couple of centuries the day I see a version of WINE that doesn’t have half the programs that one might want to run in it crashing...
A century is a very long time indeed. Think back to 1912.
I used to work on a program that was designed to run binaries compiled for one processor on another. It was only meant to run the binaries compiled for a single minor revision of a GNU/Linux distro on one processor on the same minor revision of the same distro on another processor.
We had access to the source code of the distro—and got some changes made to make our job easier. We had access to the full chip design of one chip (to which, again, there were changes made for our benefit), and to the published spec of the other.
We managed to get the product out of the door, but every single code change—even, at times, changes to non-functional lines of code like comments—would cause major problems (mention the phrase “Java GUI” to me even now, a couple of years later, and I’ll start to twitch). We would only support a limited subset of functionality, it would run at a fraction of the speed, and even that took a hell of a lot of work to do at all.
Now, that was just making binaries compiled for a distro for which we had the sources to run on a different human-designed von Neumann-architecture chip.
Given my experience of doing even that, I’d say the amount of time it would take (even assuming continued progress in processor speeds and storage capacity, which is a huge assumption) to get human brain emulation to the point where an emulated brain can match a real one for reliability and speed is in the region of a couple of hundred years, yes.
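For concreteness, a toy sketch of the core of such a translator might look like the following. This uses a made-up three-instruction guest ISA, invented purely for illustration; the real product described above was nothing like this simple.

```python
# A toy sketch of cross-ISA binary translation, assuming a hypothetical
# three-instruction guest ISA. Real translators must also get condition
# flags, memory ordering, signals and self-modifying code exactly right,
# which is where most of the pain described above comes from.

def translate(insn):
    """Turn one guest instruction into a host-executable closure."""
    op = insn[0]
    if op == "li":                        # li reg, imm : load immediate
        _, reg, imm = insn
        return lambda regs: regs.__setitem__(reg, imm)
    if op == "add":                       # add dst, src : dst += src
        _, dst, src = insn
        return lambda regs: regs.__setitem__(dst, regs[dst] + regs[src])
    if op == "halt":
        return None
    raise ValueError(f"unknown guest opcode: {op!r}")

def run(guest_program):
    """Dispatch loop with a translation cache keyed by guest program counter."""
    regs = {"r0": 0, "r1": 0}
    cache = {}                            # guest pc -> translated host code
    pc = 0
    while True:
        if pc not in cache:
            cache[pc] = translate(guest_program[pc])
        host_code = cache[pc]
        if host_code is None:             # halt
            return regs
        host_code(regs)
        pc += 1

# Example: r0 = 2; r1 = 3; r0 += r1  ->  {'r0': 5, 'r1': 3}
print(run([("li", "r0", 2), ("li", "r1", 3), ("add", "r0", "r1"), ("halt",)]))
```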
Yes, emulation can be hard. But even so, writing software with the full power of the human brain from scratch seems much harder. If you agree, then you should still expect emulations to be the first AI to arrive.
I disagree. In general I think that once the principles involved are fully understood, writing from scratch a program that performs the same generic tasks as the human brain would be easier than emulating a specific human brain.
In fact I suspect that the code for an AI itself, if one is ever created, will be remarkably compact—possibly the kind of thing that could be knocked up in a few lines of Perl once someone has the correct insights into the remaining problems. AIXI, for example, would be a trivially short program to write, if one had the computing power necessary to make it workable (which is not going to happen, obviously).
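For reference, AIXI’s action-selection rule does fit in a single expression. This is Hutter’s formulation as I remember it, so treat the exact notation as approximate:

```latex
% AIXI chooses the action a_k maximising expected future reward up to
% horizon m, with the expectation taken under a Solomonoff-style prior
% over all environment programs q consistent with the history so far.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, ℓ(q) is the length of the program q, and the o’s and r’s are observations and rewards. The expression itself is short; the inner sum over all programs is what makes it hopeless to actually run.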
My view (and it is mostly a hunch) is that implementing generic intelligence will be a much, much easier task than implementing a copy of a specific intelligence that runs on different hardware, in much the same way that, if you’re writing a computer racing game, it’s much easier to create an implementation of a car that has only the properties needed for the game than it would be to emulate an entire existing car down to the level of the emissions coming out of the exhaust pipe and a model of the screwed-up McDonald’s wrapper under the seat. The latter would be ‘easy’ in the sense of just copying what was there rather than creating something from basic principles, but I doubt it’s something that would be easier to do in practice.
Building emulators is hard. But I think it isn’t quite so hard as that, these days. Apple has now done it twice, and been able to run a really quite large subset of Mac software after each transition. Virtual machines are reasonably straightforward engineering at this point. Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine—and they’re quite robust these days with very small performance penalties. All these are certainly very large software engineering projects—but they’re routine engineering, not megaprojects, at this stage.
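To unpack “emulators for an abstract virtual machine”: at their core, runtimes like these execute bytecode for a machine whose specification the implementers themselves control. A minimal, purely illustrative stack-machine interpreter (nothing like the real JVM or CLR in scale) looks like this:

```python
# A minimal, hypothetical stack-machine interpreter, in the spirit of what
# the JVM or CLR do at their core: execute bytecode for an abstract machine
# whose specification the implementers themselves define. Real runtimes add
# verification, JIT compilation and garbage collection on top of this shape.

def execute(bytecode):
    """Run a list of (opcode, arg) pairs on an operand stack."""
    stack = []
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack[-1])
        else:
            raise ValueError(f"unknown opcode: {op}")
        pc += 1
    return stack

# (2 + 3) * 4 == 20
execute([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None), ("PRINT", None)])
```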
Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. Probably small changes in the physics model correspond to small changes in temperature, chemical content, etc. And an emulation that’s as good as a slightly feverish and drunk person would still be impressive and even useful.
“Apple has now done it twice”
No they didn’t. At least one of those times was actually the software I described above, bought from the company I worked for. So I know exactly how hard it was to create.
“Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine”—which the engineers themselves get to specify, design, and implement.
“Further, I suspect the human brain is less sensitive than software to minor details of underlying platform.” I would love to live in a world where re-implementing an algorithm that runs on meat, so it runs on silicon instead, amounted to a ‘minor detail of underlying platform’. I live in this one, however.
I had assumed we were talking about low-level emulation: the program explicitly models each neuron, and probably works at a lower level than that. And physical simulation is a well-understood problem; my impression is that the chemists are pretty good at it.
Trying to do some clever white-box reimplementation of the algorithm I agree is probably intractable or worse. The emulation will be very far from the optimal implementation of the mind-program in question.
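As a toy illustration of what “explicitly models each neuron” might mean at the very simplest end of the scale, here is one update step of a leaky integrate-and-fire network. A real whole-brain emulation would need far richer models plus synaptic chemistry; the parameters below are generic textbook-style values, not anything measured.

```python
import numpy as np

# Toy leaky integrate-and-fire network: one Euler step of the membrane
# equation dV/dt = (-(V - V_rest) + R*I) / tau, with a spike-and-reset rule.
# This only illustrates the "explicitly model each neuron" idea; it is far
# below the fidelity an actual brain emulation would require.

def lif_step(V, I_in, dt=1e-4, tau=0.02, V_rest=-0.070,
             V_thresh=-0.050, V_reset=-0.065, R=1e8):
    """Advance all membrane potentials by dt; return (new_V, spike_mask)."""
    dV = (-(V - V_rest) + R * I_in) * (dt / tau)
    V = V + dV
    spiked = V >= V_thresh
    V = np.where(spiked, V_reset, V)      # reset the neurons that fired
    return V, spiked

# 1000 neurons driven by random input currents for 100 ms of simulated time
V = np.full(1000, -0.070)
for _ in range(1000):
    V, spikes = lif_step(V, I_in=np.random.uniform(0, 3e-10, size=1000))
```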
On the other hand, the only inventions of any significance made between 1930 and 2012 were personal computers, antibiotics, and nuclear weapons.
Just taking ‘invention’ in terms of physically existent technology (where algorithms, new processes, and the like don’t count) that people experience in their everyday life—the laser, the transistor, MRI scanners, genetic engineering, the jet engine, the mobile phone, nylon, video recording, electrical amplification of musical instruments, electronic instruments, artificial pacemakers...
Add in vast improvements to previously existing technologies (I think getting people on the moon may have been mildly significant), and scientific breakthroughs that have made whole areas of technology more efficient (information theory, the Turing machine, cybernetics), and those 82 years have been some of the most inventive in human history.
Of the ones you listed, I might grant you the jet engine, which I suppose one could argue was as big an advance in transportation as the railroad, since it let people travel at 700 miles an hour instead of 70 miles an hour.
Most of what you mentioned wasn’t even as important as the electric washing machine.
(Genetic engineering has a lot of potential, but it hasn’t had much of an influence yet. We’ll need another century to really figure out how to take advantage of it—we don’t even know how to make a tree grow into the shape of a house!)