What do you think you know and how do you think you know it?
I’m guessing based on several factors:
1) The past failure of AGI research to deliver progress.
2) The apparent difficulty of the problem. We don’t know how to do it, and we don’t know what we would need to know before we can know how to do it. Or, at least, I don’t.
3) My impressions of the speed of scientific progress in general. For example, the time between “new discovery” and “marketable product” in medicine and biotechnology is typically on the order of 30 years.
4) My impressions of the speed of progress in mathematics, in which important unsolved problems often stay unsolved for centuries. It took over 300 years to prove Fermat’s Last Theorem, and the formal mathematics of computation is less than a century old; Alan Turing described the Turing Machine in 1937.
5) The difficulty of computer programming in general. People are bad at programming.
Do you also evaluate the chances of WBE as being vanishingly slim over the next century?
Actually, no, but I do expect that WBE will exist for quite a while before running a whole brain emulation becomes cheaper than hiring a human engineer. I don’t expect a particularly fast em transition; it took many years for portable telephones to go from car-mounted units that cost thousands of dollars to the cell phones that everyone uses today.
The Singularity was created by Nikola Tesla and Thomas Edison, and ended some time around 1920. Get used to it. ;)
So you expect that WBE will become possible before cheap supercomputers?
You might like to quantify “cheap” and “super”.
See reply to CronoDAS below.
Even at Moore’s Law speeds, simulating 10^11 neurons, 10^11 glial cells, 10^15 synaptic connections, and the concentrations of various neurotransmitters and other chemicals, in real time or faster, is going to be expensive for a long time before it becomes cheap.
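To make the scale concrete, here is a rough back-of-envelope sketch; every number in it (the update rate, the per-synapse and per-cell costs) is an assumption I’m choosing for illustration, not a figure from any study:

```python
# Back-of-envelope estimate of the raw compute needed for real-time WBE.
# Every parameter here is an illustrative assumption, not a measured value.

neurons = 1e11          # ~10^11 neurons
glia = 1e11             # ~10^11 glial cells (assume similar per-cell cost)
synapses = 1e15         # ~10^15 synaptic connections
update_rate_hz = 1e3    # assume state is updated 1,000 times per simulated second
ops_per_synapse = 10    # assumed operations per synapse per update
ops_per_cell = 1e3      # assumed operations per neuron/glial cell per update

ops_per_second = update_rate_hz * (
    synapses * ops_per_synapse + (neurons + glia) * ops_per_cell
)
print(f"~{ops_per_second:.0e} operations per second")  # ~1e19 under these assumptions
```

Change the assumed update rate or per-synapse cost by an order of magnitude and the answer moves with it, which is part of why estimates of when this becomes affordable vary so widely.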
Not necessarily. If a human brain with no software tricks requires 10^20 CPS (calculations per second; a very high estimate), then (according to Kurzweil, take with a grain of salt) the computational capacity will be there by ~2040. However, it’s certainly possible that we don’t get the software until 2050, by which point the hardware will have become cheap enough that anyone with a couple of hundred dollars can run one.
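For what it’s worth, the extrapolation behind a date like ~2040 is just repeated doubling; here is a sketch with an assumed starting capacity and doubling time (both are my assumptions, and Kurzweil’s own parameters differ):

```python
import math

# Naive Moore's-Law extrapolation: how many doublings until a machine at a
# fixed price point reaches the target capacity? Both the starting capacity
# and the doubling time are assumptions chosen for illustration.

target_cps = 1e20        # the deliberately high brain estimate quoted above
starting_cps = 1e16      # assumed supercomputer-class capacity at the start
doubling_time_years = 2.0

doublings = math.log2(target_cps / starting_cps)
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
# ~13.3 doublings, roughly 27 years -- in the same ballpark as "~2040".
```

Continue the same curve for a few more doublings after the capacity first exists at supercomputer prices and it falls to consumer prices, which is the intuition behind the 2050 remark.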
Depends on which details actually need to be simulated. I suspect that most intracellular activity can be neglected or replaced with a few simple rules about when a cell divides, adds a synapse, and so on.
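As one concrete example of the kind of “simple rule” abstraction I mean, a leaky integrate-and-fire neuron replaces all of the intracellular chemistry with a single membrane-potential variable and a threshold. Purely illustrative; nobody yet knows whether this level of detail would be enough for WBE:

```python
# Leaky integrate-and-fire neuron: one state variable and a threshold stand in
# for the cell's internal machinery. All constants are illustrative.

def lif_step(v, input_drive, dt=0.001, tau=0.02,
             v_rest=-0.065, v_threshold=-0.050, v_reset=-0.065):
    """Advance the membrane potential v by one timestep; return (v, spiked)."""
    v += (-(v - v_rest) + input_drive) * dt / tau  # leak toward rest plus input
    if v >= v_threshold:
        return v_reset, True  # emit a spike and reset
    return v, False

# Drive a resting neuron with a constant input until it fires.
v, spiked, steps = -0.065, False, 0
while not spiked and steps < 10_000:
    v, spiked = lif_step(v, input_drive=0.030)
    steps += 1
print(f"spiked after {steps} steps" if spiked else "no spike within 10,000 steps")
```

Whether rules at this level, detailed compartmental models, or full molecular simulation turn out to be necessary is exactly the open question.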
For the record, this is something I don’t have much confidence in: WBE requires a sufficiently detailed brain scan, computers with enough processing power to run the simulation, and enough knowledge of brains on the microscopic level to program a simulation and understand its output. I do not know which of these will turn out to be the bottleneck.
It looks like “enough knowledge of brains on the microscopic level to program a simulation” might be the limiting factor.
In which case, we have a hardware overhang and an explosive em transition.
Most technological developments seem to go from “We don’t know how to do this at all” to “We know how to do this, but actually doing it costs a fortune” to “We know how to do this at an affordable price.” WBE could be an exception, though, and completely skip over the second stage.