Most models I’ve contemplated indicate that uploading is not going to happen unless it’s the result of FAI. (Neuromorphic uFAI would happen long before uploads, which are technically far more difficult, unless there’s some serious governmental regulation going on, and that would take many years to implement.) This then becomes a question of the conjunction of two probabilities: that an FAI would ‘upload’ humans in any meaningful sense, and that FAI will be developed in less than a decade. I would bet pretty heavily against that conjunction, but at any rate it’s not that fun a prediction to speculate about, compared with its constituent parts individually.
Huh, I got the opposite impression—that the timeline for brain emulation was less uncertain than the timeline for AI. Our brain-scanning capacity is getting better and better, and once the resolution is high enough to capture individual neurons and their connections, we can “just” make a computer model of that, run it, et voilà, you’ve got brain emulation!
There are some difficulties, but overcoming them seems to require less of a conceptual breakthrough than AI, or (especially) FAI, does. It’s possible that some of it is technically impossible (maybe we can’t get the resolution needed to track individual dendrites), or that some aspects of neuron interaction are trickier than we thought.
Some simple organisms have had their brains completely mapped, yet as far as I know, no one has done a whole-brain emulation of any of them. If anyone knows otherwise I’d be interested in a reference; but if not, why not? If someone thinks they know how such a system works, then building a working model is the obvious test to perform.
Having something mapped is not the same thing as knowing how it works.
Quite, but the comment I was replying to spoke only of mapping individual neurons and their connections. This has been done, and presumably a lot is known about what they do. But it appears not to be enough. In view of that, what will it really take to do a whole human brain emulation? Until it has been demonstrated in, say, C. elegans, it is so much moonshine.
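For concreteness, here is a minimal sketch of what the naive “just run the wiring diagram” approach amounts to: a leaky integrate-and-fire network driven by a connectome matrix. Every constant and the weight matrix itself are invented placeholders, and real C. elegans neurons mostly signal with graded potentials rather than spikes, so even the choice of neuron model here is an assumption.

```python
import numpy as np

# Toy illustration only: simulate a mapped connectome as a leaky
# integrate-and-fire network. W stands in for the measured wiring
# diagram; every constant below is an invented placeholder, not a
# value from any real C. elegans dataset.

N = 302                            # C. elegans has 302 neurons
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (N, N))   # placeholder synapse strengths
W[rng.random((N, N)) > 0.1] = 0.0  # keep ~10% of connections

v = np.zeros(N)                    # membrane potentials
tau, v_thresh, v_reset = 10.0, 1.0, 0.0
dt = 0.1                           # ms per step

for step in range(10_000):
    spikes = v >= v_thresh             # neurons firing this step
    v[spikes] = v_reset
    drive = rng.normal(0.0, 0.05, N)   # stand-in sensory input
    v += ((-v / tau) + W @ spikes + drive) * dt
```

The loop is the easy part; the open question is whether any weight matrix and neuron model recoverable from a scan actually reproduce the behaviour of the animal.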
I’m not convinced that anyone actually did any work towards the nematode upload project. Who would fund it? I did hear recently the claim that it was tried and failed, but I haven’t seen any evidence. ETA: at Nick’s link, David says that there hasn’t been any work since 2001. The work I saw from 2001 looked just like a proposal. He also mentions (at github) a paper from 2005 that is relevant, but not, I think, simulation.
Just because Markram isn’t doing the obvious thing doesn’t mean he is a fraud. Funding agencies and journalists aren’t suspicious, so there’s no incentive to work on non-sexy projects. It should make you nervous that he might fool himself, but he might not; he certainly believes he has other checks.
I heard a rumor that there has been renewed interest in the nematode upload project, but I don’t have a reference. ETA: this was probably what Nick links to.
David Dalrymple is beginning a project. (github)
A lot, I expect; and yes, it remains moonshine until demonstrated. I also expect all sorts of difficulties to work through in the first attempts (giving rise to both ethical and existential problems).
It is less uncertain, but be careful to distinguish between uploads and emulation. Emulation just requires being able to scan at sufficient resolution to get something brain-like; uploading requires sufficient resolution to capture actual personalities and the like. It’s intuitively probable that you could get dangerous neuromorphic AI via emulation before you could get a full emulation of a specific, previously in-the-flesh human that would count as an ‘upload’. But I don’t have a strong technical argument for that proposition. Perhaps the Whole Brain Emulation Roadmap (PDF) would have more to say.
In terms of emulation, scanning resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of physical nerve shape, and there are emulators for the electrical interactions of these systems. Also, our brains are robust enough that significant brain damage and major chemical alteration (ecstasy etc.) are recoverable from, so if anything brains are much more robust than electronics.

AI, in contrast, has real difficulty with anything but very specific problem areas, which rarely generalise. For example, we cannot get a robot to walk and run in a robust way (BigDog is a start, but it will be a while before it’s doing martial arts), and we can’t create a face-recognition algorithm that matches human performance. We can’t even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably). All our learning algorithms have human-tweaked parameters to achieve good results, and hardly any of them can perform online learning beyond the constrained, manually fed training data used to construct them. As a result there are very few commercial applications of AI that operate unaided (i.e. not as a specific tool, equivalent to a word processor).

I would love to imagine otherwise, but I don’t understand where the confidence in AI performance is coming from. Does anyone even have a set of partial Turing-test-like steps that might lead to an AI (dangerous or otherwise)?
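For a rough sense of scale, here is a back-of-envelope estimate of what a synapse-level human emulation might demand. The neuron and synapse counts are commonly quoted ballpark figures; the per-synapse storage and compute costs are pure assumptions, so the outputs are order-of-magnitude guesses at best.

```python
# Back-of-envelope estimate of storage and compute for a
# synapse-level human brain emulation. All figures are rough
# ballpark assumptions, good to an order of magnitude at best.

neurons = 8.6e10              # ~86 billion neurons (common estimate)
synapses_per_neuron = 7e3     # ~7,000 synapses each (common estimate)
synapses = neurons * synapses_per_neuron

bytes_per_synapse = 8         # assume a 4-byte weight + 4 bytes of state
storage_bytes = synapses * bytes_per_synapse
print(f"storage: {storage_bytes / 1e15:.1f} PB")    # ~4.8 PB

timestep_hz = 1e3             # assume 1 ms simulation resolution
flops_per_synapse_update = 10 # assumed cost of one synapse update
flops = synapses * timestep_hz * flops_per_synapse_update
print(f"compute: {flops / 1e18:.1f} EFLOPS sustained")  # ~6 EFLOPS
```

Even granting the scanning resolution, the simulation budget is substantial, and this says nothing about whether eight bytes of state per synapse is remotely sufficient.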
Two of these (walking/running, and stabilizing weights with a robotic arm) are at least partially hardware limitations, though. Human limbs can move in a much broader variety of ways, and provide a lot more data back through the sense of touch than robot limbs do. With comparable hardware, I think a narrow AI could probably do about as well as humans do.
The real difficulty with both of these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning; however, because of a few high-profile disasters, such systems are generally avoided.
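A contrived toy example of the failure mode: the adaptive-gain rule below regulates an unstable plant perfectly well at a small adaptation rate, and the identical rule blows the system up at a larger one. The plant and update rule are invented for illustration, not drawn from any real incident.

```python
# Contrived toy, not a real control design: a gradient-flavoured
# adaptive gain on a scalar unstable plant. The same update rule
# that converges for a small adaptation rate destroys the system
# for a larger one, and nothing in the rule itself warns you.

def run(adapt_rate, steps=2000):
    a, b = 1.05, 1.0   # open-loop unstable plant: x' = a*x + b*u
    k = 0.0            # feedback gain, learned online
    x = 1.0            # plant state; the goal is to regulate it to 0
    for t in range(steps):
        u = -k * x                  # control action
        x = a * x + b * u           # plant response
        k += adapt_rate * x * x     # raise the gain while error persists
        if abs(x) > 1e6:
            return f"diverged at step {t}"
    return f"converged, |x| = {abs(x):.1e}"

print(run(0.01))  # converges, |x| ends up tiny
print(run(3.0))   # same rule diverges after a handful of steps
```

The point is that the divergence only shows up in closed loop, after the controller has already passed what look like reasonable tests.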
Clumsy humans have caused plenty of disasters, too. Matching human dexterity with human-quality hardware is not such a high bar.
True; in fact, despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on, because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as it at least provides a common criterion against which to measure progress. I think a standardised simulator would help (with challenges that can be solved and shared within it), but even more useful would be robot designs that could be printed with a 3D printer (plus some Lego-like assembly) so that progress could be shared rapidly. I realise this is much less capable than human machinery, but I feel there is a lot further to go on the software and AI side.
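To illustrate, a shared simulator would not need to be elaborate to be useful; a fixed contract like the hypothetical sketch below (all names invented for the example, and a real effort would need physics and sensor models behind it) would already let challenges and controllers be exchanged as plain code and scored identically everywhere.

```python
# Hypothetical sketch of a minimal shared-simulator interface, so
# that challenge tasks and controllers could be exchanged as plain
# code. All names here are invented for illustration.

from abc import ABC, abstractmethod

class Challenge(ABC):
    """A reproducible task: walking, grasping, balancing, etc."""

    @abstractmethod
    def reset(self) -> list[float]:
        """Start a fresh episode; return the initial sensor readings."""

    @abstractmethod
    def step(self, motor_commands: list[float]) -> tuple[list[float], float, bool]:
        """Advance one timestep; return (sensors, score, done)."""

def evaluate(challenge: Challenge, controller, episodes: int = 10) -> float:
    """Shared scoring loop, so results are comparable across labs."""
    total = 0.0
    for _ in range(episodes):
        sensors = challenge.reset()
        done = False
        while not done:
            sensors, score, done = challenge.step(controller(sensors))
            total += score
    return total / episodes
```

Anything that fixes the step/score contract removes the need to ship hardware in order to reproduce a result.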
I would use a MakerBot instead, since its development trajectory is enhanced by thousands of interested MakerBot operators who can improve the printer and build upgrades for it. The UP! 3D printer, on the other hand, is not open source and is a lot more expensive.
I’m confused. You’re saying de novo AGI is harder than brain emulation. That’s debatable (I’d rather not debate it on Less Wrong), but I don’t see how it’s a response to anything I said.
In my mind, the distance between the resolution necessary to make something brain-like and functional and the resolution necessary to make a perfect copy of the target brain is not very large—at least, not large enough to make a big difference in expected time of arrival.
By analogy to a computer: once you can scan and copy a computer well enough for the copy to function, you’re not very far from being able to make a copy that’s functionally equivalent.
Bear in mind that we created computers in such a way that copying is easy: we made them digital, and we use checksums.
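To spell out the checksum point: a digital copy can be verified bit-for-bit in a few lines, which is exactly the kind of end-to-end check a scanned brain model lacks. (The file names here are hypothetical.)

```python
# Digital copies can be verified exactly: hash the source and the
# copy and compare. Nothing analogous tells you whether a scanned
# brain model is a faithful copy rather than merely brain-like.

import hashlib

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

original = file_digest("disk_image.bin")      # hypothetical file names
copy = file_digest("disk_image_copy.bin")
print("exact copy" if original == copy else "copy is corrupted")
```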
Even given that the technology is being created by the same species that takes decades to weed out bugs in something as (relatively) trivial as a computer operating system?
My understanding is that, if we can’t find even a single shortcut, whole human brain emulation will produce machine intelligence… eventually. However, engineering-based approaches seem highly likely to beat that path by a considerable margin.
Aeroplanes are not scanned birds, submarines are not scanned fish, cars are not scanned horses—and so on, and so forth.
As far as I can tell, whole human brain emulation as a route to machine intelligence is an approach that is based almost entirely upon wishful thinking by philosophers.