1) Yes, brains have lots of computational power, but you already accounted for that when you said “human-level AI” in your claim. A human-level AI will, with high probability, run at 2x human speed within 18 months thanks to Moore’s law, even if we find no algorithmic optimizations. That speedup by itself is probably sufficient to get a (slow-moving) intelligence explosion.
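To make the compounding concrete, here is a toy calculation (my illustration, assuming an idealized, uninterrupted 18-month doubling of serial speed):

```python
# Toy Moore's-law compounding: serial-speed multiplier after `months`,
# assuming an idealized, uninterrupted 18-month doubling time.
def speed_multiplier(months, doubling_months=18.0):
    return 2.0 ** (months / doubling_months)

for years in (1.5, 3, 6, 9):
    print(f"{years} years -> {speed_multiplier(years * 12):.0f}x human speed")
```

Even with zero algorithmic progress, that curve gives 64x human speed after nine years, which is the sense in which hardware alone could drive a slow-moving explosion.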
2) It’s not read access that makes the major difference; it’s write access. Biological humans will probably never have write access to biological brains. Simulated brains or AGIs probably will have, or will be able to get, write access to their own brains. Also, DNA is not the source code to your brain; it’s the source code to the robot that builds your brain. It’s probably not the best tool for understanding the algorithms that make the brain function.
3) As said elsewhere, the question is whether the speed at which you can pick the low-hanging fruit dominates the speed at which increased intelligence makes additional fruit low-hanging. I don’t think this has an obviously correct answer either way; a toy model of the trade-off is sketched below.
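One way to make the trade-off concrete (my framing, not anything established) is the standard toy model dI/dt = c·I^k, where k measures how strongly current intelligence accelerates further gains. If k > 1, new fruit becomes low-hanging faster than it is picked and growth runs away; if k < 1, the fruit depletes and growth decelerates. A minimal sketch, with illustrative and non-empirical values of c and k:

```python
# Toy model of recursive self-improvement: dI/dt = c * I**k.
# k > 1: gains compound faster than fruit depletes -> runaway growth.
# k < 1: low-hanging fruit depletes faster than it regrows -> slowdown.
# c, k, and the cutoff are illustrative assumptions, not estimates.
def simulate(k, c=0.1, steps=200, dt=0.5):
    intelligence = 1.0
    history = []
    for _ in range(steps):
        intelligence += c * intelligence**k * dt  # forward-Euler step
        history.append(intelligence)
        if intelligence > 1e6:  # treat runaway growth as "explosion"
            break
    return history

for k in (0.7, 1.3):
    history = simulate(k)
    print(f"k={k}: reached {history[-1]:.1f} after {len(history)} steps")
```

The model says nothing about which regime we are actually in; it just shows that either answer is internally consistent, which is why the question has no obvious resolution.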
1) I expect the first AI with human-level thought to run 100x slower than you or I. Moore’s law will probably run out before we get AI, and these days Moore’s law is giving us more cores, not faster ones.
If we indeed find no algorithm that runs drastically faster than the brain, Moore’s law shifting to more cores won’t be a problem because the brain is inherently parallelizable.
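A rough way to see why more cores suffice, using Amdahl’s law: if a fraction p of the computation parallelizes, n cores give a speedup of 1 / ((1 - p) + p/n). The p values below are illustrative assumptions; the point is just that as p approaches 1, which the brain’s architecture suggests, core count converts almost directly into speed:

```python
# Amdahl's law: speedup from n cores when fraction p of the work
# is parallelizable.  The p values are illustrative, not measured.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (0.5, 0.95, 0.999):
    print(f"p={p}: 1024 cores -> {amdahl_speedup(p, 1024):.0f}x speedup")
```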
I think we just mean different things by “human level”—I wouldn’t consider “human level” thought running at 1/5th the speed of a human or slower to actually be “human level”. You wouldn’t really be able to have a conversation with such a thing.
And as Gurkenglas points out, the human brain is massively parallel—more cores instead of faster cores is actually desirable for this problem.