There are a lot of things we simply don’t know about the brain, and even less about consciousness and intelligence in the human sense. In many ways, I don’t think we even have the right words to talk about this. Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain. Thus it’s conceivable that a whole-brain emulation at the level of individual neurons might be insufficient to produce human-type intelligence and consciousness. If so, we’d need quite a few more generations of Moore’s law than we’re currently estimating before we could expect to finish a whole-brain emulation.
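To put a rough number on “quite a few more generations”, here’s a back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption (the neuron and synapse counts are commonly cited round numbers; the microtubule multiplier is a pure guess), not a measurement:

```python
import math

# Back-of-the-envelope: how many extra Moore's-law doublings would a
# microtubule-level emulation need over a neuron-level one?
# All numbers below are illustrative assumptions, not measurements.

NEURONS = 1e11             # ~10^11 neurons in a human brain (commonly cited)
SYNAPSES_PER_NEURON = 1e4  # ~10^4 synapses per neuron (commonly cited)
MICROTUBULE_FACTOR = 1e6   # assumed extra state per neuron if microtubule
                           # substructure matters (pure guess, for illustration)

neuron_level_units = NEURONS * SYNAPSES_PER_NEURON
microtubule_level_units = neuron_level_units * MICROTUBULE_FACTOR

# Each Moore's-law generation doubles capacity, so the extra wait is
# log2 of the extra simulation workload.
extra_doublings = math.log2(microtubule_level_units / neuron_level_units)
print(f"Extra doublings needed: {extra_doublings:.0f}")     # ~20
print(f"At ~2 years per doubling: ~{2 * extra_doublings:.0f} years")
```

Even under these made-up numbers the point stands: a six-orders-of-magnitude increase in required detail only costs about twenty doublings, but that’s still roughly four decades of waiting.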
Furthermore, the smaller structures would be more susceptible to quantum effects. Then again, maybe not. Roger Penrose and Stuart Hameroff have developed this idea as the theory of orchestrated objective reduction (Orch-OR). The theory has been hotly disputed, but so far I don’t think it’s been conclusively proven or disproven. It is, however, experimentally testable and falsifiable. I suspect it’s too early to claim definitively that quantum effects either are or are not required for human-type intelligence and consciousness, but more research will likely help us answer this question one way or the other.
I will say this: there is a lot of bad physics and philosophy out there that has been misled by bad popular descriptions of quantum mechanics, in which a conscious observer collapses the wave function, into concluding that consciousness is intimately tied up with quantum mechanics. I feel safe ruling that much out. However, it still seems possible that our consciousness and intelligence are routinely or occasionally susceptible to quantum randomness, depending on the scale at which they operate.
Even if Penrose’s ideas about how human intelligence arises from quantum effects are all true, that still does not prove that all intelligence requires quantum randomness. If you want to answer that question, the first thing you need to do is define what you mean by “intelligence”. That’s trickier than it sounds at first, but I think it can be usefully done. In fact, there are multiple possible definitions of intelligence, useful for different purposes. For instance, one is the ability to formulate plans that enable one to achieve a goal. Consciousness is a much thornier nut to crack. I don’t know that anyone has a good handle on that yet.
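The planning definition, at least, can be made concrete. Here’s a minimal sketch: a breadth-first planner over a toy state space. The states and actions are invented purely for illustration; this is the weakest possible instance of “formulating plans to achieve a goal”, not a claim about how brains do it:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a sequence of actions reaching `goal`.

    `actions` maps an action name to a function from state to state.
    Returns a shortest list of action names, or None if unreachable.
    """
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, step in actions.items():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy example: reach 10 from 0 using increment/double actions.
print(plan(0, 10, {"inc": lambda s: s + 1, "dbl": lambda s: s * 2}))
# -> ['inc', 'inc', 'dbl', 'inc', 'dbl']
```

Nothing in this loop is quantum, which is exactly the point: under the planning definition, the question of whether intelligence *requires* quantum randomness becomes an empirical question about scale and capability, not a definitional one.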
Skimming the article you linked, it looks like Penrose believes human mathematical intuition comes from quantum-gravitational effects. So on Penrose’s view it might be possible that AGI requires a quantum-gravitational hypercomputer, not just a quantum computer.
Note that according to Scott Aaronson (in his recent book), Penrose thinks that human minds can solve the Halting problem and conjectures that humans can even solve the Halting problem for machines with access to a Halting oracle.
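For context on how strong that claim is: the standard diagonal argument shows that no classical program can decide halting, so Penrose is attributing to minds something provably beyond any Turing machine. Here is a sketch of the contradiction; the `halts` function is hypothetical by construction:

```python
# Sketch of Turing's diagonal argument: why no classical program can
# decide the Halting problem. `halts` is the hypothetical decider that
# Penrose claims human minds effectively exceed.

def halts(program, arg) -> bool:
    """Hypothetical: returns True iff program(arg) eventually halts."""
    raise NotImplementedError("No such total function can exist.")

def diagonal(program):
    # Do the opposite of whatever the decider predicts about
    # self-application.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately

# Feeding `diagonal` to itself is contradictory either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops;
# if False, it halts. So `halts` cannot exist as a classical program.
# Penrose's conjecture goes further still: minds beat even machines
# equipped with a halting *oracle* on the oracle-relative version.
```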
Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.
Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don’t have any money right now to propose a bet, but if it turns out that the brain can’t be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.
Consciousness is a much thornier nut to crack. I don’t know that anyone has a good handle on that yet.
Daniel Dennett’s papers on the subject seem to be making a lot of sense to me. The details are still fuzzy, but I find that, having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally begin to behave in ways that cause it to say the sorts of things about consciousness that I do.
“More susceptible” is not the same as “susceptible”. If it’s bigger than an atom, we don’t need to take quantum effects into account to get a good approximation, and moreover any effects that do happen are going to be very small and won’t affect consciousness in a relevant way (since we don’t experience random changes to consciousness from small effects). There’s no need to accurately model the brain to perfect detail, just to roughly model it, which almost certainly does not involve quantum effects at all.
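The scale argument can even be made quantitative. The figures below follow Tegmark’s well-known order-of-magnitude decoherence estimates (Phys. Rev. E 61, 2000); treat them as rough assumptions rather than precise values:

```python
# Order-of-magnitude check on whether quantum coherence could matter for
# neural computation. Figures are Tegmark's rough estimates
# (Phys. Rev. E 61, 2000), not precise measurements.

DECOHERENCE_TIME_S = 1e-13   # microtubule superposition lifetime
                             # (~10^-20 s for whole-neuron states)
NEURAL_SIGNALING_S = 1e-3    # timescale on which neurons actually fire

gap = NEURAL_SIGNALING_S / DECOHERENCE_TIME_S
print(f"Coherence decays ~{gap:.0e} times faster than neurons compute.")
# ~1e+10: any superposition is destroyed ten billion times faster than
# the neural dynamics it would need to influence, which is why a rough
# classical model is expected to suffice.
```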
Incidentally, there’s nothing special about quantum randomness. Why should consciousness be related to splitting worlds in a special way? Once you drop the observer-focused interpretations, there’s nothing connecting them. If the brain needs randomness, there are easier sources.
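To make that last point concrete: if a brain model did need noise, an ordinary classical source does the job. A toy sketch, where the weights, threshold, and noise level are invented for illustration:

```python
import random

def noisy_threshold_neuron(inputs, weights, threshold=1.0, noise_sd=0.1):
    """Toy threshold unit with classical Gaussian noise on its membrane
    potential. Nothing here requires quantum randomness; thermal noise
    (or even a seeded PRNG) plays the same functional role in a model."""
    potential = sum(x * w for x, w in zip(inputs, weights))
    potential += random.gauss(0.0, noise_sd)  # classical noise source
    return potential >= threshold

# Example: a unit sitting exactly at threshold fires stochastically.
fires = sum(noisy_threshold_neuron([1.0, 0.5], [0.7, 0.6])
            for _ in range(1000))
print(f"Fired on {fires}/1000 trials")  # ~500, driven by classical noise
```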
…if it turns out that the brain can’t be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.

If you find someone to bet against you, I’m willing to eat half the hat.
We could split it three ways, provided that agreeing in principle, while doubting that an actual complete human brain will ever be simulated, counts.