Yes, I know. The point is that it seems to be generally accepted that some things (particularly certain kinds of code breaking) are only likely to become doable in a realistic amount of time with quantum computing, so some people (I’m not one of them) might think AI is in a similar boat.
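To put very rough numbers on that intuition, here is a back-of-envelope sketch in Python comparing factoring a 2048-bit RSA modulus classically (heuristic GNFS complexity, ignoring constants and the o(1) term in the exponent) against a rough gate count for Shor’s algorithm. Order-of-magnitude illustration only, not a serious cost model:

```python
import math

# Rough back-of-envelope: why factoring a 2048-bit RSA modulus is
# considered classically infeasible but quantumly tractable.
# (Heuristic complexities only; constants and o(1) terms ignored.)

bits = 2048
ln_n = bits * math.log(2)  # ln of a ~2048-bit number

# General Number Field Sieve: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
c = (64 / 9) ** (1 / 3)
gnfs_ops = math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Shor's algorithm: roughly O((log n)^3) quantum gate operations
shor_ops = bits ** 3

print(f"GNFS (classical): ~{gnfs_ops:.1e} operations")  # ~1.6e35
print(f"Shor (quantum):   ~{shor_ops:.1e} gate ops")    # ~8.6e9
```

On those numbers the classical attack is astronomically out of reach while the quantum gate count is merely large, which is the sense in which this kind of code breaking is “only doable with QC”.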
We have natural intelligence made of meat, processing by ion currents in liquid. Ion currents in liquid have an extremely short decoherence time, way too short to compute with.
Are you arguing with students of Deepak Chopra?
While I doubt AI needs QC, I don’t think this argument works. The same argument would seem to rule out birds exploiting quantum phenomena to navigate, yet they are thought to do so.
There’s a difference between exploiting quantum phenomena and using entanglement. There’s a large set of quantum mechanical behavior which doesn’t really add much computationally. (To some extent this is part of why we don’t call our normal laptops quantum computers even though transistors and hard drives use quantum mechanics to work.)
Precisely. That’s why we shouldn’t be calling our brains ‘quantum’ either...
Or if we do, then that is in no way an argument against our using our current off-the-shelf ‘quantum’ computers!
Entanglement is what QM does that classical can’t do directly (can in sim, of course). Everything else is just funny force laws.
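To make the “can in sim” part concrete, here’s a minimal NumPy sketch that builds a Bell pair with an explicit state vector. The point is that this brute-force classical simulation needs 2^n complex amplitudes for n qubits, so it blows up exponentially:

```python
import numpy as np

# "Can in sim, of course"; but the brute-force classical simulation of
# n entangled qubits needs a state vector of 2**n complex amplitudes.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, np.eye(2)) @ state          # Hadamard on the first qubit
state = CNOT @ state                           # entangle: Bell pair

print(state)  # (|00> + |11>)/sqrt(2): amplitudes ~ [0.707, 0, 0, 0.707]

# The catch: the vector doubles with every added qubit.
for n in (10, 50, 300):
    print(f"{n} qubits -> {2**n:.3e} amplitudes")
```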
No, it doesn’t. I addressed the ion-current nature of nerve action potentials.
Birds’ directional sensing couples to such a system but is not made of it.
Then the discussion should be about the amount of computations required, not about classical vs quantum.
It’s not possible to discuss “the amount of computations required” without specifying a model of computation. Chris is asking whether an AI might be much slower on a classical computer than on a quantum computer, to the extent that it’s practically infeasible unless large-scale quantum computing is feasible. This is a perfectly reasonable question to ask, and I think your objection must be due to an over-literal interpretation of his post title or some other misunderstanding.
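One concrete illustration of how the model matters: unstructured search over N items takes about N/2 oracle queries on average classically, while Grover’s algorithm needs only about (π/4)√N. A quick sketch (illustrative numbers only):

```python
import math

# The "amount of computation" depends on the model: unstructured search
# over N items costs ~N/2 oracle queries classically (on average), but
# Grover's algorithm needs only ~(pi/4) * sqrt(N) queries.

for N in (10**6, 10**12, 10**18):
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N = {N:.0e}: classical ~{classical:.1e} queries, "
          f"Grover ~{grover:.1e} queries")
```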
I agree; there are more steps between “AI is hard” and “we need QC”.
However, from what I understand, those who say “QC is required for AI” just use this “argument” (e.g. “AI is at least as hard as code breaking”) as an excuse to avoid thinking about AI, not as a thoughtful conclusion drawn from analyzing the available data.