So is AI. If I had to bet, I would give very good odds (70%? Incredibly arbitrary guess) for the hypothesis: “Understanding how a brain works well enough to build something with basically the same behavior is easier (society will do it first) than designing a completely foreign AI.”
Notice, for example, that if our current understanding of physics is correct, the amount of time needed to simulate a brain is probably (# of neurons in the brain) * (time required to simulate a neuron in sufficient detail). Nature never deals with complexities like (# of neurons in the brain)!, i.e. factorial growth.
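To make the contrast concrete, here is a minimal sketch with assumed numbers: the ~8.6 × 10^10 neuron count is the usual rough figure, and the per-neuron cost is an arbitrary placeholder, not a number from this thread.

```python
import math

N_NEURONS = 8.6e10    # ~86 billion neurons; rough textbook figure
T_PER_NEURON = 1e-3   # assumed seconds of compute per neuron per simulated step (placeholder)

# Linear estimate from the comment: total time ~ (# neurons) * (time per neuron)
linear_seconds = N_NEURONS * T_PER_NEURON
print(f"linear cost: ~{linear_seconds:.1e} s per simulated step")

# A factorial cost, by contrast, is astronomically out of reach.
# math.lgamma(n + 1) = ln(n!), converted here to a base-10 order of magnitude.
log10_factorial = math.lgamma(N_NEURONS + 1) / math.log(10)
print(f"factorial cost: ~10^{log10_factorial:.2e} steps -- not even remotely computable")
```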
Note that I went on to talk about how difficult that would be, which, with a Moore’s law progression of computing power, gives a timescale of a century to millennia, using our current simulations as a yardstick.
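As a rough sketch of the arithmetic behind that timescale, assuming a two-year doubling period and purely illustrative shortfall factors between today’s simulations and a whole-brain simulation (none of these numbers come from the original discussion):

```python
import math

DOUBLING_PERIOD_YEARS = 2.0  # assumed Moore's-law doubling time

def years_until_feasible(shortfall_factor):
    """Years of doublings needed to close a given gap between
    today's simulation capability and a whole-brain simulation."""
    doublings = math.log2(shortfall_factor)
    return doublings * DOUBLING_PERIOD_YEARS

# Illustrative gaps: 15 orders of magnitude gives roughly a century,
# 150 orders of magnitude gives roughly a millennium.
for gap in (1e15, 1e30, 1e150):
    print(f"gap of 10^{math.log10(gap):.0f}x -> ~{years_until_feasible(gap):.0f} years")
```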
I don’t think “nature never deals with exponential complexities” is a good enough reason to expect we won’t see them in simulating the brain. It’s a bit dubious to start with (linear complexity isn’t true of planets, so why should it be true of neurons?), and porting the brain to the von Neumann architecture can introduce plenty of things nature never intended. Obviously the timescale cuts off once we have nano-scale engineering good enough to build a brain directly and not have to port it anywhere, but given the requirements for that, I don’t think it will change the probable lower bound of centuries.
Are you saying Moore’s law will keep working for centuries or millennia? You can only make transistors so small.
Also, the capital cost has been increasing exponentially.
Definitely not, but it’s reasonable in the near future and probably an upper bound in the farther future.