In terms of emulation, the resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of the physical nerve structure. There are emulators for the electrical interactions of these systems. Our brains are also robust enough that significant brain damage and major chemical alteration (ecstasy, etc.) are recoverable from, so if anything brains are much more robust than electronics.

AI, in contrast, has real difficulty with anything but very specific problem domains, which rarely generalise. For example, we cannot get a robot to walk and run in a robust way (BigDog is a start, but it will be a while before it's doing martial arts), and we can't create a face-recognition algorithm that matches human performance. We can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably). All our learning algorithms have human-tweaked parameters to achieve good results, and hardly any of them can perform online learning beyond the constrained, manually fed training data used to construct them. As a result there are very few commercial applications of AI that operate unaided (i.e. not as a specific tool, equivalent to a word processor). I would love to imagine otherwise, but I don't understand where the confidence in AI performance is coming from. Does anyone even have a set of partial Turing-test-like steps that might lead to an AI (dangerous or otherwise)?
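To make "emulators for the electrical interactions" concrete, here is a minimal sketch of the simplest such model, a leaky integrate-and-fire neuron. The parameter values are illustrative textbook assumptions, not taken from any particular emulation project:

```python
# A minimal sketch of the kind of electrical emulation meant above:
# a leaky integrate-and-fire neuron. All parameter values are
# illustrative textbook assumptions, not from any particular emulator.

tau_m   = 20e-3    # membrane time constant (s)
v_rest  = -70e-3   # resting potential (V)
v_th    = -54e-3   # spike threshold (V)
v_reset = -80e-3   # post-spike reset potential (V)
r_m     = 1e7      # membrane resistance (ohm)
dt      = 1e-4     # integration time step (s)

v, spike_times = v_rest, []
for step in range(int(0.5 / dt)):                # simulate 500 ms
    t = step * dt
    i_in = 2e-9 if 0.1 <= t < 0.4 else 0.0       # 2 nA pulse from 100-400 ms
    # forward-Euler step of: tau_m * dv/dt = -(v - v_rest) + r_m * i_in
    v += (dt / tau_m) * (-(v - v_rest) + r_m * i_in)
    if v >= v_th:                                # threshold crossed: emit spike
        spike_times.append(t)
        v = v_reset
print(f"{len(spike_times)} spikes; first at {spike_times[0] * 1e3:.1f} ms"
      if spike_times else "no spikes")
```

Real emulation work integrates far richer compartment models, but the control flow is the same: numerically integrate the membrane equations and propagate spikes across the reconstructed wiring diagram.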
Two of these (walking/running, and stabilizing weights with a robotic arm) are at least partly hardware limitations, though. Human limbs can move in a much broader variety of ways, and they feed back far more data through the sense of touch than robot limbs do. With comparable hardware, I think a narrow AI could probably do about as well as humans do.
The real difficulty with both these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning, but because of a few high-profile disasters, such systems are generally avoided.
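As a toy illustration of that failure mode (a hypothetical sketch, not one of the disasters alluded to): a naive adaptive controller whose gain only ever grows can regulate well for thousands of steps and then abruptly destabilise once the gain drifts past the closed loop's stability margin:

```python
# A toy illustration (hypothetical, not one of the disasters alluded to)
# of a learning-based controller that looks stable, then self-destructs.
# Plant: x[t+1] = a*x[t] + u[t] + noise, controlled by u = -k*x.
# The closed loop x[t+1] = (a - k)*x[t] + noise is stable only while
# |a - k| < 1, i.e. 0.1 < k < 2.1 for a = 1.1. The naive adaptation
# rule below only ever increases k, so persistent noise slowly winds
# the gain up through that margin.
import random

a, gamma = 1.1, 0.05          # plant pole; adaptation rate (assumed values)
x, k = 1.0, 0.5               # initial state and gain
for t in range(20_000):
    u = -k * x                            # proportional control
    x = a * x + u + random.gauss(0, 0.1)  # noise keeps x from settling
    k += gamma * x * x                    # naive adaptation: gain only grows
    if t % 2_000 == 0 or abs(x) > 1e6:
        print(f"t={t:6d}  k={k:10.3f}  x={x: .3e}")
        if abs(x) > 1e6:
            print("destabilised: k wound up past the 2.1 stability margin")
            break
```

For thousands of iterations the trace looks like a well-behaved regulator; nothing in the transient behaviour warns that the gain is drifting toward the margin, which is exactly why stability guarantees for learning controllers are hard.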
Clumsy humans have caused plenty of disasters, too. Matching human dexterity with human-quality hardware is not such a high bar.
True; in fact, despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as at least it provides a common criterion for measuring progress. I think a standardised simulator would help (with challenges that can be solved and shared within it), but even more useful would be robot designs that could be printed with a 3D printer (plus some assembly, like Lego) so that progress could be rapidly shared. I realise this is much less capable than human machinery, but I feel there is a lot further to go on the software and AI side.
I would use a MakerBot instead, since its development trajectory is enhanced by thousands of interested MakerBot operators who can improve and build upgrades for the printer. The UP! 3D printer, on the other hand, is not open source and is a lot more expensive.
I’m confused. You’re saying de novo AGI is harder than brain emulation. That’s debatable (I’d rather not debate it on Less Wrong), but I don’t see how it’s a response to anything I said.