I’m curious if any of you feel that future widespread use of commercial-scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain, with a multitude of programs already written, tested, available, economical, and functionally useful) will have any impact on the development of strong A.I.? Has anyone read or written any literature with regard to potential windfalls this could bring to A.I.’s advancement (or lack thereof)?
I’m also curious if other paradigm shifting computing technologies could rapidly accelerate the path toward superintelligence?
Based on the current understanding of quantum algorithms, I think the smart money is on a quadratic (or sub-quadratic) speedup from quantum computers on most tasks of interest for machine learning. That is, rather than taking N^2 time to solve a problem, it can be done in N time. This is true for unstructured search and now for an increasing range of problems that will quite possibly include the kind of local search that is the computational bottleneck in much modern machine learning. Much of the work of serious quantum algorithms people is spreading this quadratic speedup to more problems.
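To make the quadratic speedup concrete, here is a back-of-the-envelope query count for unstructured search (a sketch with illustrative helper names, not a real quantum library): classical brute force needs on the order of N oracle queries, while Grover’s algorithm needs about (π/4)·√N.

```python
import math

def classical_queries(n):
    # Classical unstructured search: worst case examines every item.
    return n

def grover_queries(n):
    # Grover's algorithm: ~ (pi/4) * sqrt(n) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

# A search over a trillion items drops from 10^12 queries to under a million.
print(classical_queries(10**12))  # 1000000000000
print(grover_queries(10**12))     # 785399
```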
In the very long run quantum computers will also be able to go slightly further than classical computers before they run into fundamental hardware limits (this is beyond the quadratic speedup). I think they should not be considered as fundamentally different than other speculative technologies that could allow much faster computing; their main significance is increasing our confidence that the future will have much cheaper computation.
I think what you should expect to see is a long period of dominance by classical computers, followed eventually by a switching point where quantum computers pass their classical analogs. In principle you might see faster progress after this switching point (if you double the size of your quantum computer, you can do a brute force search that is 4 times as large, as opposed to twice as large with a classical computer), but more likely this would be dwarfed by other differences which can have much more than a factor of 2 effect on the rate of progress. This looks likely to happen long after growth has slowed for the current approaches to building cheaper classical computers.
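The factor-of-4 point falls out of Grover scaling directly: in time T a classical machine searches N = T items, while a quantum machine searches N ≈ T² items, so doubling the time (or machine) budget quadruples the quantum search space. A toy sketch (illustrative functions, not models of real hardware):

```python
def classical_search_size(t):
    # Classical brute force: one item checked per unit of time, so N = T.
    return t

def quantum_search_size(t):
    # Grover-style search: N = T^2 items coverable in time T.
    return t * t

t = 1_000
assert classical_search_size(2 * t) == 2 * classical_search_size(t)  # 2x larger
assert quantum_search_size(2 * t) == 4 * quantum_search_size(t)      # 4x larger
```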
For domains that experience the full quadratic speedup, I think this would allow us to do brute force searches something like 10-20 orders of magnitude larger before hitting fundamental physical limits.
Note that D-Wave and its ilk are unlikely to be relevant to this story; we are a good ways off yet. I would even go further and bet on essentially universal quantum computing arriving before such machines become useful in AI research, though I am less confident about that one.
I’ve worked on the D-Wave machine (in that I’ve run algorithms on it—I haven’t actually contributed to the design of the hardware). About that machine, I have no idea if it’s eventually going to be dramatically faster than conventional hardware. It’s an open question. But if it is, that would be huge, as a lot of ML algorithms can be directly mapped to D-Wave hardware. It seems like a perfect fit for the sort of stuff machine learning researchers are doing at the moment.
About other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that there will be nothing like a 512-qubit fully-entangled quantum computer (what you’d need to, say, crack the basic RSA algorithm) within the next 20 years at least. Personally I’d put my money on >50 years in the future. The problems just seem too hard; all progress has stalled; and every time someone comes up with a way to try to solve them, it just results in a host of new problems. For instance, topological quantum computers were hot a few years ago since people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new types of decoherence (thermal fluctuations). When you do the math, it turns out that you haven’t actually gained much by using a topological framework, and further you can simulate a topological quantum computer on a normal one, so really a TQC should be considered just another quantum error-correction algorithm, of which we already know many.
All indications seem to be that by 2064 we’re likely to have a human-level AI. So I doubt that quantum computing will have any effect on AI development (or at least development of a seed AI). It could have a huge effect on the progression of AI though.
Our human cognition is mainly based on pattern recognition (compare Ray Kurzweil, How to Create a Mind). Information stored in the structures of our cranial neural network sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge amounts of patterns can be stored while most pattern recognizers are in sleeping mode, consuming very little energy.

Quantum computing with decoherence times on the order of seconds is totally unsuitable for the synergistic task of pattern analysis and long-term pattern memory with millions of patterns.

IBM’s newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² chip and only 70 mW power consumption in operation, is far better suited to push technological development toward AI.
What are the indications you have in mind?
Katja, that’s a great question, and highly relevant to the current weekly reading sessions on Superintelligence that you’re hosting. As Bostrom argues, all indications seem to be that the necessary breakthroughs in AI development can at least be seen over the horizon, whereas in my opinion (and I’m an optimist) general quantum computing requires much bigger breakthroughs.
From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you had suggested. I wouldn’t be surprised to see it as soon as 2024 in prototypical, alpha, or beta testing, and think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with regard to efforts by governments to control the tech for security reasons, perhaps also … later here, earlier there.
Scott Aaronson seems to disagree: http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?_r=3&ref=science&pagewanted=all&
FTA: “The problem is decoherence… In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet… useful quantum computers might still be decades away”
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive (therefore, I’d consider it journalistically honest) about the time frame. “...might be decades away...” and “...might not really see them in the 21st century...” come to mind as lower and upper estimates.
I don’t want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still say that a significant percentage of the articles I have found—in Nature news summaries, on PubMed (oddly, lots of physical-sciences journals are on there now too), and in “smart layman” publications like New Scientist and the SciAm news site—continue to carry mini-stories about groups nibbling away at the decoherence problem and finding approaches that don’t require supercooled, exotic vacuum chambers (some even working toward the possibility of chips).
If 10 percent of these stories have legs and aren’t hype, that would mean I have read dozens which might yield prototypes in a 10–20 year time window.
The Google–NASA–UCSB joint project seems pretty near term (i.e. not 40 or 50 years down the road).
Given Google’s penchant for quietly working away and then doing something amazing the world thought was a generation away—like unveiling the driverless cars that the Governor and legislature of Michigan (as in, of course, Detroit) are in the process of licensing for larger-scale production and deployment—it wouldn’t surprise me if one popped up in 15 years that could begin doing useful work.
Then it’s just daisy-chaining and parallelizing with classical supercomputers doing error correction, pre-forming datasets to exploit what QCs do best, and interleaving that with conventional techniques.
I don’t think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.
I am more interested in this: positing that we add them to our toolkit, what can we do that is relevant to creating “interesting” forms of AI?
Thanks for your link to the nyt article.
Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn’t achieve Y and Z. That’s not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.
I’ve seen several papers like “Quantum speedup for unsupervised learning” but I don’t know enough about quantum algorithms to have an opinion on the question, really.
Another paper I haven’t read: “Can artificial intelligence benefit from quantum computing?”
Luke,
Thanks for posting the link. It’s an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I’ll post my impression, if I have anything worthwhile to say, either here in Katja’s group or up top on LW generally, when I have time to read more of it.
Did you read about Google’s partnership with NASA and UCSD to build a quantum computer of 1000 qubits?
Technologically exciting, but … imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.
That would be catastrophic, for business, economies, governments, individuals, every form of commerce, military communication....
Didn’t answer your question, I am sorry, but as a “fan” of quantum computing, and also a person with a long-time interest in the quantum Zeno effect, free will, and the implications for consciousness (as often discussed by Henry Stapp, among others), I am both excited, yet feel a certain trepidation. Like I do about nanotech.
I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think it (QC) will have a dramatic effect on artifactual consciousness platforms, and I am even more certain that it will accelerate superintelligence (which is not at all the same thing, as intelligence and consciousness, in my opinion, are not coextensive).
My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems—Kerberos doesn’t require public-key authentication. And you can sign things with Merkle signatures.
There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks.
So I think we shouldn’t be too apocalyptic here.
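The standard rule of thumb behind this (stated here as an assumption, since exact attack costs depend on hardware): Shor’s algorithm breaks RSA/ECC outright, while Grover’s algorithm only halves the effective key length of a symmetric cipher, so doubling key sizes restores the margin.

```python
def symmetric_security_vs_quantum(key_bits):
    # Grover's search halves the effective strength of a symmetric key.
    return key_bits // 2

# AES-128 retains ~64 bits of security against a quantum attacker;
# AES-256 retains ~128 bits, which is still considered comfortable.
assert symmetric_security_vs_quantum(128) == 64
assert symmetric_security_vs_quantum(256) == 128
```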
Asr,
Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.
If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.
A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.
I also worry about the legacy problem… all the critical documents encrypted with RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, that suddenly are visible to anyone with access to the tech. How do we go about re-encoding all those “eyes only” critical docs into a post-quantum system (assuming one is shown practical and reliable), without those documents being “looked at” or opportunistically copied in their limbo state between old and new encrypted status?
Who can we trust to do all this conversion, even given the new algorithms are developed?
This is actually almost intractably messy, at first glance.
What do you mean by artificial consciousness, to the extent that it’s not intelligence, and why do you think the problem is in a form where quantum computers are helpful? Which specific mathematical problems do you think are important for artificial consciousness that are better solved via quantum computers than our current computers?
What do you mean by artificial consciousness, to the extent that it’s not intelligence, and why do you think the problem is in a form where quantum computers are helpful?
The claim wasn’t that artifactual consciousness wasn’t (likely to be) sufficient for a kind of intelligence, but that they are not coextensive. It might have been clearer to say that consciousness is closer to being sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all).
I needn’t have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose “consciousness”), get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lie down for a nap, all in an absence-seizure state, and post-ictally recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal “amnesia” to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc., from start to finish, by neurologists.) Moral: intelligent behavior occurs, consciousness doesn’t. Thus, not coextensive. I have other arguments, also.
As to your second question, I’ll have to defer an answer for now, because it would be copiously long… though I will try to think of a reply (plus the idea is very complex and needs a little more polish, but I am convinced of its merit). I owe you a reply, though, before we’re through with this forum.
Is there an academic paper that makes that argument? If so, could you reference it?
I have dozens, some of them so good I have actually printed hardcopies of the PDFs—sometimes misplacing the DOIs in the process.
I will get some, though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between “consciousness” and other functions. There is a particularly interesting one (74 pages, but it’s a page-turner) whose original computer record I will try to find. I found it, and most of them, on PubMed.
If we are in a different thread string in a couple days, I will flag you. I’d like to pick a couple of good ones, so it will take a little re-reading.
I think it is that kind of thing that we should start thinking about, though. It’s the consequences that we have to worry about as much as developing the tech. Too often, new things have been created and people have not been mindful of the consequences of their actions. I welcome the discussion.
The D-Wave quantum computer solves a general class of optimization problems very quickly. It cannot speed up arbitrary computing tasks, but the class of computing problems that include an optimization component it can speed up appears to be large.
Many “AI Planning” tasks will be a lot faster with quantum computers. It would be interesting to learn what the impact of quantum computing will be on other specific AI domains like NLP and object recognition.
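The problem class the D-Wave machine targets is quadratic unconstrained binary optimization (QUBO): minimize x·Q·x over binary vectors x. A classical brute-force sketch of that objective (toy matrix, not the real D-Wave API) shows what the annealer is approximating:

```python
from itertools import product

def qubo_energy(x, Q):
    # Energy of binary assignment x under QUBO matrix Q: sum_ij Q[i][j]*x[i]*x[j].
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    # Exhaustive minimum-energy search; an annealer samples low-energy states instead.
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy instance: each variable is individually rewarded (-1 on the diagonal),
# but picking x0 and x1 together incurs a +2 penalty.
Q = [[-1, 2, 0],
     [0, -1, 0],
     [0, 0, -1]]
best = brute_force_qubo(Q)
assert qubo_energy(best, Q) == -2
```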
We also have:
- Reversible computing
- Analog computing
- Memristors
- Optical computing
- Superconductors
- Self-assembling materials
And lithography, or printing, just keeps getting faster on smaller and smaller objects, and is going from 2D to 3D.
When Bostrom starts to talk about it, I would like to hear people’s opinions about untangling the importance of hardware vs. software in the future development of AI.