It seems plausible that similar issues could occur when emulating a human brain. But if they do, wouldn't they probably be resolvable by a simple increase in processing power? (Or perhaps by buffering the sensory input.)
If you read the article, you'll see the answer is simply "no". The whole point of the article is that throwing more resources at the problem doesn't, by itself, make emulation any easier.
I spent three years working on a product whose only function was to take binaries compiled for a GNU/Linux distro on one CPU and make them runnable on the same distro on another CPU. Having seen how difficult this is to do even when you're talking about the same OS, to which you have the source code, and two human-designed von Neumann-architecture chips, I know that 'uploading' will take far, far, far more effort than most people on this site currently believe.
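To make that concrete, here is a minimal C sketch of just one of the mismatches a binary translator has to handle: byte order. (This toy example is my own illustration, not code from that product.) The same four bytes decode to different integers depending on whether the host CPU is little-endian, like x86, or big-endian, like older PowerPC chips. Source code can be written to sidestep this, but a translator working from a compiled binary has to detect and fix every such assumption with no source to consult.

```c
/* Toy illustration of an architecture-dependent assumption baked
 * into compiled code: decoding a 32-bit value from raw bytes. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    const uint8_t bytes[4] = {0x12, 0x34, 0x56, 0x78};

    /* Naive decode: reinterpret the bytes in host byte order.
     * Yields 0x78563412 on little-endian hosts and 0x12345678 on
     * big-endian hosts -- same code, different result. */
    uint32_t naive;
    memcpy(&naive, bytes, sizeof naive);

    /* Portable decode: assemble the value explicitly, byte by
     * byte, so the result is 0x12345678 on every architecture. */
    uint32_t portable = ((uint32_t)bytes[0] << 24) |
                        ((uint32_t)bytes[1] << 16) |
                        ((uint32_t)bytes[2] << 8)  |
                         (uint32_t)bytes[3];

    printf("naive:    0x%08x\n", (unsigned)naive);
    printf("portable: 0x%08x\n", (unsigned)portable);
    return 0;
}
```

And byte order is one of the easier problems; word size, alignment rules, memory-ordering guarantees, and instruction-set quirks all pile on top of it. A brain is not even a human-designed architecture, so the equivalent mismatches are unknown in advance.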