Katja, that’s a great question, and highly relevant to the weekly reading sessions on Superintelligence that you’re hosting. As Bostrom argues, all indications are that the necessary breakthroughs in AI development are at least visible over the horizon, whereas with general quantum computing it seems to me (and I’m an optimist) that we need much larger breakthroughs.
From what I have read in open-access science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn’t be surprised to see it as soon as 2024 in prototype, alpha, or beta form, and I’d consider 2034 a safe bet for wider deployment. Very widespread adoption may come a bit later, and as for government efforts to control the technology for security reasons, perhaps later in some places and earlier in others.
Scott Aaronson seems to disagree: http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?_r=3&ref=science&pagewanted=all&
FTA: “The problem is decoherence… In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet… useful quantum computers might still be decades away”
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive about the time frame (which I’d consider journalistically honest). “...might still be decades away...” and “...might not really see them in the 21st century...” come to mind as the lower and upper estimates.
I don’t want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still find that a significant share of articles, in the Nature news and summary sites, PubMed (which, oddly, now indexes a lot of physical-sciences journals too), and “smart layman” publications like New Scientist and the SciAm news site, continue to run mini-stories about groups nibbling away at the decoherence problem and finding approaches that don’t require supercooled, exotic vacuum chambers (some even raising the possibility of chips).
If 10 percent of these stories have legs and aren’t hype, that would still mean I have read about dozens of efforts that might yield prototypes in a 10–20 year window.
The Google-NASA-UCSB joint project seems pretty near-term (i.e., not 40 or 50 years down the road).
Given Google’s penchant for quietly working away and then unveiling something amazing the world thought was a generation away (the driverless cars, for instance, which the governor and legislature of Michigan, home of Detroit, are in the process of licensing for larger-scale production and deployment), it wouldn’t surprise me if a quantum computer that could begin doing useful work popped up within 15 years.
Then it’s just a matter of daisy-chaining, parallelizing with classical supercomputers doing error correction, pre-forming datasets to exploit what QCs do best, and interleaving that with conventional techniques.
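To make that hybrid picture concrete, here is a toy sketch of the loop I have in mind: a classical driver pre-forms the data, hands each item to a quantum subroutine, and applies classical error correction (simple majority voting) over repeated runs. Everything here is an assumption for illustration; the "quantum" step is just a noisy classical stand-in, not a real QC call.

```python
import random

def noisy_quantum_subroutine(x, error_rate=0.2):
    """Stand-in for a quantum coprocessor call: computes x*x, but
    occasionally returns a wrong answer, mimicking decoherence."""
    correct = x * x
    if random.random() < error_rate:
        return correct + random.choice([-1, 1])  # a "bit flip" style error
    return correct

def error_corrected(x, repeats=25):
    """Classical error correction: repeat the noisy call and take a
    majority vote over the observed results."""
    results = [noisy_quantum_subroutine(x) for _ in range(repeats)]
    return max(set(results), key=results.count)

# Classical side pre-forms the dataset, the (simulated) quantum step
# does the core computation, and the classical side corrects/aggregates.
data = [1, 2, 3, 4]
squares = [error_corrected(x) for x in data]
print(squares)
```

The point is only the division of labor: the classical machine handles preparation and redundancy, while the unreliable specialized unit handles the core step, which is roughly how people describe near-term hybrid quantum-classical schemes.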
I don’t think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.
I am more interested in this: supposing we add them to our toolkit, what can we do that is relevant to creating “interesting” forms of AI?
Thanks for your link to the nyt article.
Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn’t achieve Y and Z. That’s not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.