Quite the strawman you’re attacking here, Eliezer. Where are all these AI researchers who think just tossing a whole bunch of (badly simulated) neurons into a vat will produce human-like intelligence?
There are lots of people trying to figure out how to use simulated neurons as building blocks to solve various sorts of problems. Some of them use totally non-biological neuron models, some use more accurate models. In either case, what’s wrong with saying: “The brain uses this sort of doohickey to do all sorts of really powerful computation. Let’s play around with a few of them and see what sort of computational problems we can tackle.”
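To make the cheap end of that spectrum concrete, a "totally non-biological" neuron model can be as simple as a leaky integrate-and-fire unit. This is just a sketch with made-up constants, not anyone's production model:

```python
import numpy as np

def lif_neuron(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire: about the simplest 'neuron' you can use
    as a computational building block. All constants are illustrative."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # membrane potential decays toward rest, driven by the input current
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# 100 ms of constant drive, strong enough to make the unit fire repeatedly
print(lif_neuron(np.full(1000, 20.0)))
```

Nothing biological about it beyond the cartoon of "integrate inputs, fire when you cross a threshold," but it's enough to start asking what you can compute with a pile of them.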
Then, from the other end, there’s the Blue Brain project, saying “let’s build an accurate simulation of a brain, starting from a handful of neurons and working our way up, making sure at every step that our simulation responds to stimulation just like the real thing. Maybe then we can reverse-engineer how the brain is doing its thing.” When their simulations deviate from the real thing, they run more tests on the real thing to figure out where they’re going wrong. Will they succeed before someone else builds an AI and/or the funding runs out? Maybe, maybe not; but they’re making useful contributions already.
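In code, that validate-and-go-back-to-the-bench loop is nothing fancier than this (a toy sketch, not their actual pipeline; the tolerance and function name are mine):

```python
import numpy as np

def compare_to_recording(simulated_mv, recorded_mv, tolerance_mv=2.0):
    """Toy version of the validation step: compare a simulated voltage
    trace against an in-vitro recording of the same stimulus and flag
    where they diverge. The tolerance is an arbitrary placeholder."""
    error = np.abs(np.asarray(simulated_mv) - np.asarray(recorded_mv))
    bad = np.flatnonzero(error > tolerance_mv)
    if bad.size == 0:
        return "matches within tolerance"
    # these time indices go back to the wet lab for more measurements
    return f"deviates at {bad.size} samples, first at index {bad[0]}"

print(compare_to_recording([-65.0, -64.2, -40.1], [-65.1, -64.0, -55.3]))
```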
Eliezer: “the data elements you call ‘neurons’ are nothing like biological neurons. They resemble them the way that a ball bearing resembles a foot.”
A model of a spiking neuron that keeps track of multiple input compartments on the dendrites and a handful of ion channels is accurate enough to duplicate the response of a real live neuron. That’s basically the model that Blue Brain is using. (Or perhaps I misread your analogy, and you’re just complaining about your terrible orthopedic problems?)
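For the curious, here’s a toy two-compartment version of that kind of model: Hodgkin-Huxley sodium and potassium channels in the soma, plus a passive dendritic compartment that receives the input. The actual Blue Brain models use full reconstructed morphologies and many more channel types; the coupling and dendritic leak numbers below are invented for illustration.

```python
import numpy as np

# Two-compartment Hodgkin-Huxley sketch: an active soma (Na+, K+, leak)
# coupled to a passive dendrite that receives the injected current.
C = 1.0                      # membrane capacitance, uF/cm^2
g_na, e_na = 120.0, 50.0     # sodium channel (textbook HH values)
g_k,  e_k  = 36.0, -77.0     # potassium channel
g_l,  e_l  = 0.3, -54.4      # leak
g_c = 0.5                    # soma<->dendrite coupling (illustrative)

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate(i_dend, dt=0.01):
    """Forward-Euler integration; i_dend is the current injected into the
    dendritic compartment at each time step (uA/cm^2)."""
    v_s, v_d = -65.0, -65.0          # soma and dendrite start at rest
    m, h, n = 0.05, 0.6, 0.32        # gating variables near their rest values
    spike_times = []
    for step, i_in in enumerate(i_dend):
        # ionic currents through the somatic channels
        i_na = g_na * m**3 * h * (v_s - e_na)
        i_k  = g_k * n**4 * (v_s - e_k)
        i_l  = g_l * (v_s - e_l)
        # soma: channel currents plus current flowing in from the dendrite
        dv_s = (-i_na - i_k - i_l + g_c * (v_d - v_s)) / C
        # dendrite: passive leak, coupling to the soma, injected input
        dv_d = (-g_l * (v_d - e_l) + g_c * (v_s - v_d) + i_in) / C
        # channel gating kinetics
        dm = alpha_m(v_s) * (1 - m) - beta_m(v_s) * m
        dh = alpha_h(v_s) * (1 - h) - beta_h(v_s) * h
        dn = alpha_n(v_s) * (1 - n) - beta_n(v_s) * n
        v_s_new = v_s + dt * dv_s
        if v_s < 0.0 <= v_s_new:     # crude spike detector: upward zero crossing
            spike_times.append(step * dt)
        v_s, v_d = v_s_new, v_d + dt * dv_d
        m, h, n = m + dt * dm, h + dt * dh, n + dt * dn
    return spike_times

# 50 ms of constant current into the dendrite; enough to elicit a spike train
print(simulate(np.full(5000, 15.0)))
```

Crude as that is, it already captures the point: “compartments plus ion channels” buys you spike timing and input integration that a ball-bearing-style abstract unit simply doesn’t have.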
I’m not saying that neurons or brain simulation are the One True Way to AI; I agree that a more engineered solution is likely to work first, mostly because biological systems tend to have horrible interdependencies everywhere that make them ridiculously hard to reverse-engineer. But I don’t think that’s a reason to sling mud at the people who step up to do that reverse engineering anyway.
Eh, I guess this response belongs on some AI mailing list and not here. Oh well.