Hi, and thanks for the link to the NYT article. I just read the entire piece, which was good for a general news story and, correspondingly, not definitive about the time frame (therefore, I'd consider it journalistically honest). "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.
I don’t want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still say I have found a significant fraction of articles, across the Nature news-summary sites, PubMed (oddly, lots of "physical sciences" journals are indexed there now too), and "smart layman" publications like New Scientist and the SciAm news site, that continue to run mini-stories about groups nibbling away at the decoherence problem and finding approaches that don't require supercooled, exotic vacuum chambers (some even working toward the possibility of chips).
If 10 percent of these stories have legs and aren't hype, that would mean I have read about dozens of efforts that might yield prototypes in a 10-20 year window.
The Google/NASA/UCSB joint project seems pretty near-term (i.e., not 40 or 50 years down the road).
Given Google's penchant for quietly working away and then unveiling something the world thought was a generation away (the driverless cars, for example, which the Governor and legislature of Michigan, home of Detroit, are in the process of licensing for larger-scale production and deployment), it wouldn't surprise me if a quantum computer popped up in 15 years that could begin doing useful work.
Then it's just daisy-chaining and parallelizing: classical supercomputers doing error correction, preprocessing datasets to exploit what QCs do best, and interleaving that with conventional techniques.
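To make that hybrid picture concrete, here is a minimal Python sketch of the interleaving loop I have in mind. Everything here is hypothetical: the "quantum" step is stubbed out with a noisy classical stand-in, and the classical side does simple majority-vote error correction over repeated runs. It illustrates the workflow shape, not any real QC API.

```python
import random

def classical_preprocess(data):
    # Classical front end: encode the problem so the quantum step
    # only sees the part it handles best (toy encoding: parity bits).
    return [x % 2 for x in data]

def quantum_subroutine(encoded):
    # Placeholder for a real quantum-computer call; here a classical
    # stand-in that flips each bit with 10% probability (noise).
    return [bit if random.random() > 0.1 else 1 - bit for bit in encoded]

def classical_error_correct(raw_runs, runs):
    # Classical back end: majority vote across repeated noisy runs,
    # the "supercomputers doing error correction" role above.
    votes = [sum(col) for col in zip(*raw_runs)]
    return [1 if v > runs // 2 else 0 for v in votes]

def hybrid_pipeline(data, runs=7):
    # Interleave: preprocess classically, repeat the quantum step,
    # then clean up the results classically.
    encoded = classical_preprocess(data)
    raw_runs = [quantum_subroutine(encoded) for _ in range(runs)]
    return classical_error_correct(raw_runs, runs)

print(hybrid_pipeline([3, 4, 7, 10]))
```

The point of the sketch is the division of labor: the quantum step sits in the middle of an otherwise classical loop, and repetition plus classical post-processing papers over its noise.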
I don’t think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.
I am more interested in this: supposing we add them to our toolkit, what can we do that is relevant to creating "interesting" forms of AI?
Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn’t achieve Y and Z. That’s not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.