I’m no expert, but even Kurzweil—who, judging from past performance, is usually correct but over-optimistic by maybe five or ten years—doesn’t expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.
2020 is five years away. The kind of progress that would imply—going from where we are now to full human-level AI in just five years—seems incredible.
Kurzweil’s methodology for selecting those dates is suspect. He calculated, from back-of-the-envelope whole-brain-emulation estimates, the number of FLOPS he thinks it would take to run a human-level AGI. The most powerful supercomputer today, Tianhe-2 in China, exceeds that level. So by Kurzweil’s own estimates, the human race already has access to enough computing power to run even an inefficient emulative AGI. The years he quotes are when that much computing power would be available for $1,000 USD. If you believe in a takeoff scenario, however, all that should matter is when the first AGI is created, not how much it costs to buy the equipment to run another one.
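To make the arithmetic concrete, here is a rough sketch in Python. The specific figures are ballpark assumptions, not exact numbers from Kurzweil: roughly 10^16 FLOPS for a functional brain emulation, Tianhe-2’s measured ~33.86 petaFLOPS, and a guessed price-performance curve for the $1,000 question.

```python
# Assumed whole-brain-emulation requirement (commonly cited ballpark, ~1e16 FLOPS).
BRAIN_FLOPS = 1e16
# Tianhe-2's Linpack benchmark result, ~33.86 petaFLOPS.
TIANHE2_FLOPS = 3.386e16

# The claim: the hardware to run one emulative AGI already exists.
hardware_exists = TIANHE2_FLOPS >= BRAIN_FLOPS
print(hardware_exists)

# The separate question Kurzweil's dates answer: when does $1,000 buy that
# much compute? Assume ~1e10 FLOPS per dollar today (a rough GPU-era figure)
# and price-performance doubling every 1.5 years (both assumptions).
flops_per_dollar = 1e10
years = 0.0
while 1000 * flops_per_dollar < BRAIN_FLOPS:
    flops_per_dollar *= 2
    years += 1.5
print(years)  # a decades-scale gap between "the hardware exists" and "it costs $1,000"
```

Under these assumed numbers the gap is about 15 years, which is the point of the objection: the date at which the hardware gets cheap is much later than the date at which it first exists.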
So we have sufficient computational power today to run an artificial general intelligence. The problem, then, is software. How long will it take to write the software underlying the first AGI? And for whatever value you claim, do you have credible reasoning behind that choice?
Personally I think 5 years is a bit fast. But the quote was the 2020s, the midpoint of which is still 10 years away. I think 10 years is doable if we really, really try. What’s your estimate?
Yikes, but that’s early. That’s a lot sooner than I would have said, even as a reasonable lower bound.
You have a credible reason for thinking it will take longer?