Moore’s Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013
Well atom-size features are scheduled to come along on that time-scale, believed to mark the end of scaling feature size downwards. That has been an essential part of Moore’s law all along the way. Without it, one has to instead do things like use more efficient materials at the same size, new architectural designs, new cooling, etc. That’s a big change in the underlying mechanisms of electronics improvement, and a pretty reasonable place for the trend to go awry, although it also wouldn’t be surprising if it kept going for some time longer.
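The "on that time-scale" claim is just geometric-shrink arithmetic. A rough sketch (the 2013 node size, atom diameter, and shrink rates are stock figures supplied here, not from the discussion):

```python
import math

def atomic_scale_year(start_year=2013, start_nm=22.0,
                      shrink=0.7, period_yr=2, atom_nm=0.2):
    """Year when a geometric feature-size shrink reaches atomic scale."""
    steps = math.ceil(math.log(atom_nm / start_nm) / math.log(shrink))
    return start_year + steps * period_yr

# Density-doubling trend (~0.7x linear shrink per 2-year node):
print(atomic_scale_year(shrink=0.7))  # 2041
# A faster assumption (halving linear size per node):
print(atomic_scale_year(shrink=0.5))  # 2027
```

Either way the naive extrapolation lands in the 2027-2041 range, bracketing the "10-20 years after 2013" window.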
AI academia was Great Stagnating (this is relatively easy to believe)
The so-called “Great Stagnation” isn’t actually a stagnation, it’s mainly just compounding growth at a slower rate. How much of the remaining distance to AGI do you think was covered 2002-2012? 1992-2002?
All the Foresight people were really really optimistic about nanotech
Haven’t they been so far?
In any case, nanotechnology can’t shrink feature sizes below atomic scale, and that’s already coming up via conventional technology. Also, if the world is one where computation is energy-limited, denser computers that use more energy in a smaller space aren’t obviously that helpful.
perhaps the press releases are exaggerated
Could you give some examples of what you had in mind?
Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etc.
Well, there is demographic decline: rich country populations are shrinking. China is shrinking even faster, although bringing its youth into the innovation sectors may help a lot.
Biotech stays regulation-locked forever—not too hard to believe.
Say biotech genetic engineering methods are developed in the next 10-20 years, heavily implemented 10 years later, and the kids hit their productive prime 20 years after that. Then they go faster, but how much faster? That’s a fast biotech trajectory to enhanced intelligence, but the fruit mostly falls in the last quarter of the century.
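Stacking the stage lengths makes the timing explicit (a trivial sketch; the durations are exactly the ones in the comment above):

```python
# Earliest/latest arrival of enhanced researchers, stacking the
# stages from the comment: development, implementation, maturation.
START = 2013
DEVELOPMENT = (10, 20)  # methods developed in the next 10-20 years
IMPLEMENTATION = 10     # heavily implemented ~10 years later
MATURATION = 20         # kids reach productive prime ~20 years on

earliest = START + DEVELOPMENT[0] + IMPLEMENTATION + MATURATION
latest = START + DEVELOPMENT[1] + IMPLEMENTATION + MATURATION
print(earliest, latest)  # 2053 2063
```

So the first enhanced cohorts arrive mid-century at the earliest, and the bulk of their output lands in the century's last quarter.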
Anders Sandberg is wrong about basically everything to do with uploading.
See 15:30 of this talk: Anders’ Monte Carlo simulation (assumptions debatable, obviously) is a wide curve with a center around 2075. Separately, Anders expresses nontrivial uncertainty about the brain model/cognitive neuroscience step, setting aside the views of the non-Anders population.
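For readers unfamiliar with how such a forecast yields "a wide curve" rather than a date: a Monte Carlo timeline samples a duration for each uncertain prerequisite and combines them. A toy illustration in that spirit; the stage names and lognormal parameters here are invented placeholders, not the actual model from the talk:

```python
import random

random.seed(0)

# Sample a duration for each uncertain prerequisite of emulation
# and take the latest one. Parameters are illustrative only.
def sample_completion_year():
    scanning = random.lognormvariate(3.4, 0.5)     # years of R&D
    brain_model = random.lognormvariate(4.0, 0.6)  # the neuroscience step
    compute = random.lognormvariate(3.0, 0.4)
    return 2013 + max(scanning, brain_model, compute)

samples = sorted(sample_completion_year() for _ in range(10_000))
median = samples[5_000]
width = samples[9_000] - samples[1_000]  # 10th-90th percentile span
print(round(median), round(width))  # a wide curve, not a point estimate
```

The output is a distribution whose spread is comparable to its distance from today, which is why quoting only the median understates the uncertainty.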
You’re the first halfway-sane person I’ve ever heard put the median at 2100.
vs
I didn’t say 87 years, but closer to 87 than 32 (or 16, for Kurzweil’s prediction of a Turing-Test passing AI).
I said “near the end of the century” contrasted to a prediction of intelligence explosion in 2045.
Well atom-size features are scheduled to come along on that time-scale, believed to mark the end of scaling feature size downwards.
A very tangential point, but in his 1998 book Robot, Hans Moravec speculates about atoms made from alternative subatomic particles that are smaller and able to absorb, transmit and emit more energy than the versions made from electrons and protons.
That doesn’t apply to large proteins yet, but it doesn’t make me pessimistic about the nanotech timeline. (Which is to say, it makes me update in favor of faster R&D.)
It’s also worth pointing out that conventional computers could already solve these particular protein folding problems.
You have a computer doing something we could already do, but less efficiently than existing methods, which have not been impressively useful themselves?
Scott Aaronson:
“I no longer feel like playing an adversarial role. I really, genuinely hope D-Wave succeeds.” That said, he noted that D-Wave still hadn’t provided proof of passing a critical test of quantum computing.
I am not qualified to judge whether D-Wave’s claim that their adiabatic quantum computer uses quantum annealing, rather than standard simulated annealing (as Scott suspects), is justified. However, the lack of independent replication of their claims is disconcerting.
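For background, "standard simulated annealing" (the classical algorithm Scott suspects the machine may effectively be running) minimizes an energy function by occasionally accepting uphill moves, with the acceptance probability shrinking as a temperature parameter is lowered. A minimal sketch on a toy Ising chain, my own illustrative problem with no connection to D-Wave's actual hardware:

```python
import math
import random

random.seed(1)

# Minimize E(s) = -sum_i s_i * s_{i+1} over spins s_i in {-1, +1}.
# Ground states are the two fully aligned chains, with E = -(N-1).
N = 30

def energy(spins):
    return -sum(spins[i] * spins[i + 1] for i in range(N - 1))

spins = [random.choice((-1, 1)) for _ in range(N)]
temperature = 2.0
for _ in range(20_000):
    i = random.randrange(N)
    before = energy(spins)
    spins[i] *= -1                     # propose flipping one spin
    delta = energy(spins) - before
    # Metropolis rule: always accept downhill; uphill with prob e^(-dE/T).
    if delta > 0 and random.random() >= math.exp(-delta / temperature):
        spins[i] *= -1                 # reject: undo the flip
    temperature = max(0.01, temperature * 0.9995)

print(energy(spins))  # typically near the ground-state energy, -29
```

Quantum annealing has the same outer structure but is supposed to tunnel through energy barriers rather than hop over them thermally, which is where any genuine advantage would have to come from.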
Here’s one: http://phys.org/news/2012-08-d-wave-quantum-method-protein-problem.html
http://blogs.nature.com/news/2012/08/d-wave-quantum-computer-solves-protein-folding-problem.html
ETA: https://plus.google.com/103530621949492999968/posts/U11X8sec1pU
The G+ post explains what it’s good for pretty well, doesn’t it?
It’s not a dramatic improvement (yet), but it’s a larger potential speedup than anything else I’ve seen on the protein-folding problem lately.
You can duplicate that D-Wave machine on a laptop.
True, but somewhat beside the point; it’s the asymptotic speedup that’s interesting.
...you know, assuming the thing actually does what they claim it does. sigh
Also no asymptotic speedup.
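The disagreement here is constants versus asymptotics, and a toy calculation shows why the asymptotic term eventually dominates. The step rates and the quadratic-speedup form below are hypothetical, purely for illustration:

```python
# Hypothetical: a laptop searches 1e9 conformations/sec by brute
# force, taking ~2^n steps on size-n problems; a special-purpose
# machine runs 1000x fewer steps per second but needs only ~2^(n/2)
# steps (a quadratic asymptotic speedup). Illustrative numbers only.
def laptop_seconds(n):
    return 2 ** n / 1e9

def annealer_seconds(n):
    return 2 ** (n / 2) / 1e6

crossover = next(n for n in range(1, 200)
                 if annealer_seconds(n) < laptop_seconds(n))
print(crossover)  # 20: past this size, the "slower" machine wins
```

A 1000x constant-factor handicap is erased by problem size 20 under these assumptions; with no asymptotic speedup, it is never erased.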
Nobody believes in D-Wave.
That seems like an oversimplification. Clearly some people do.
Maybe they could get Andrea Rossi to confirm.