In order to run a brain without first understanding AI, you have to simulate the brain as a physical object.
This is difficult; the brain is complicated (example). Currently the IBM SyNAPSE project can simulate a lot of nodes in a network that behave sort of like neurons (still 5 orders of magnitude away from even running a real-time network with as many “neurons” as a human brain, never mind one as complex), but if these tricky physical interactions have to be simulated as well, the problem grows exponentially. So what looks like 5 orders of magnitude is more like… lots. We can probably take some shortcuts, but even subtle changes to the brain can produce things like schizophrenia, so I’m reluctant to estimate. In the hardest case, where you have to treat each neuron as affecting every other neuron via electric fields… what’s the factorial of (# of neurons in the human brain)?
You have on the order of 10^11 neurons. We can use Stirling’s formula, which is a good, quick approximation for n!, to get around 10^(11*10^11 − 4*10^10 + 6).
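A quick sanity check of that figure (using the same order-of-magnitude neuron count assumed above):

```python
import math

def log10_factorial(n: float) -> float:
    """Approximate log10(n!) via Stirling's formula:
    ln(n!) ~= n*ln(n) - n + 0.5*ln(2*pi*n)."""
    ln_fact = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    return ln_fact / math.log(10)

n = 1e11  # order-of-magnitude neuron count for a human brain
print(f"log10({n:.0e}!) ~= {log10_factorial(n):.4e}")
# ~1.0566e+12, i.e. about 10^(11*10^11 - 4.3*10^10 + 6), matching the above.
```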
Note that there’s growing evidence that glial cells play a role in neural interaction, so a total based on neurons alone is not necessarily an upper bound. At the same time, though, it seems that electric-field interactions aren’t that important (humans react pretty normally in the presence of strong electromagnetic fields unless those fields are of specific types, so the interaction can’t be that sensitive). Moreover, we know that killing a few neurons doesn’t drastically alter personality, which is a strong argument against such complicated interactions.
Not a strong argument: the brain is known to have a fair bit of redundancy, as demonstrated by the way parts of a damaged brain can be trained to perform tasks that the corresponding parts do not handle in healthy brains.
So is AI. If I had to bet, I would give very good odds (70%? Incredibly arbitrary guess) for the hypothesis: “Understanding how a brain works well enough to build something with basically the same behavior is easier (society will do it first) than designing a completely foreign AI.”
Notice, for example, that if our current understanding of physics is correct, the amount of time needed to simulate a brain is probably (# of neurons in the brain) * (time required to simulate a neuron in sufficient detail). Nature never deals with complexities like (# of neurons in the brain)! (factorial).
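As a rough illustration of that linear-cost model (the per-neuron cost below is an assumed figure for scale only, not a measured one):

```python
# Linear-cost model: total time per simulated step
# = (# neurons) * (time to simulate one neuron for one step).
n_neurons = 1e11            # order-of-magnitude human neuron count
per_neuron_seconds = 1e-6   # assumed (not measured) per-neuron step cost
print(f"{n_neurons * per_neuron_seconds:.0e} s per simulated step")  # 1e5 s, ~28 h
# Even "every neuron affects every other via fields" gives n*(n-1)/2
# pairwise terms per step, which is huge but nothing like n!:
print(f"{n_neurons * (n_neurons - 1) / 2:.1e} pairwise terms per step")  # ~5.0e21
```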
Note that I went on to talk about how difficult it would be, which with a Moore’s-law progression of computing power gives a timescale of a century to millennia, using our current simulations as a yardstick.
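A back-of-the-envelope version of that timescale argument (the 2-year doubling time and the gap sizes below are illustrative assumptions, not the original poster’s figures):

```python
import math

def years_to_close(orders_of_magnitude: float, doubling_years: float = 2.0) -> float:
    """Years for available compute to grow by the given number of orders of
    magnitude, assuming a Moore's-law-style doubling every `doubling_years`."""
    return orders_of_magnitude * math.log2(10) * doubling_years

print(years_to_close(5))     # ~33 years: the raw 5-OOM neuron-count gap
print(years_to_close(15))    # ~100 years: extra OOMs for biophysical detail
print(years_to_close(1e12))  # ~6.6e12 years: a factorial-scale gap never closes
```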
I don’t think “nature never deals with exponential complexities” is a good enough reason why we won’t see them in simulating the brain. It’s a bit dubious to start with (linear complexity isn’t true of planets, whose pairwise gravitational interactions already make the n-body problem superlinear, so why should it be true of neurons?), and porting the brain to the von Neumann architecture can introduce plenty of things nature never intended. Obviously the timescale cuts off once we have nano-scale engineering good enough to build a brain and not have to port it anywhere, but given the requirements for that, I don’t think it will change the probable lower bound of centuries.
Are you saying Moore’s law will keep working for centuries or millennia? You can only make transistors so small.
Also, the capital cost has been increasing exponentially.
Definitely not, but it’s reasonable in the near future and probably an upper bound in the farther future.
What reasons are there to believe that we can understand intelligence without understanding the brain first? AIXI is to narrow AI as a universal Turing machine is to a modern Intel chip. To produce a modern Intel CPU you need a US$2.5 billion chip factory. To produce something like IBM Watson you need a company with a revenue of US$99.87 billion and 426,751 employees to support it. What reasons do you have to believe that developing an artificial general intelligence capable of explosive recursive self-improvement takes orders of magnitude fewer resources than figuring out how the brain works? After all, the human brain is the only example of an efficient general intelligence that we have.
Because there aren’t any indications that general intelligence is so narrow a category that we have to copy the brain, so the question is “which is faster: normal AI research starting now, or modeling the brain starting later?” Once the brain is understood to some high degree, basing an intelligence on it gives you a cheat sheet for most of the decisions of normal AI research, but you still have to implement it computationally, which will be harder than ordinary AI research. So I think there’s a good chance, though I’m not certain, that normal AI research will be able to make good on its head start and create a self-improving AI first. Both will be faster than simulating a specific human brain, which is what I said would take orders of magnitude more resources.
Another consideration favoring normal AI over whole brain emulation is that evolution finds local optima. It may be possible to exceed the brain’s effectiveness or efficiency at some intellectual tasks by using a radically different architecture.
Yes, that is about the correct answer to this question. We can see that emulations of scanned brains won’t come first, since they require more advanced technology and understanding to develop. Broadly speaking, it’s the same situation as with scanning birds to build flying machines.
I am not sure what you were going for here, but FWIW, AIXI is pretty general.
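For readers who haven’t met it: AIXI is Hutter’s definition of an optimal general agent, fully general but uncomputable, which is what makes the universal-Turing-machine analogy apt. Roughly, in Hutter’s standard formulation (details elided; see his 2005 book for the precise version):

```latex
% AIXI's action choice at cycle k, planning to horizon m: maximize expected
% total reward under a Solomonoff-style mixture over all programs q for a
% universal Turing machine U that reproduce the interaction history.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```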