Oh, the two guidelines I suggested certainly aren’t a complete algorithm—that’s why I called them guidelines, not rules :-) Maybe I should list a third (or first) guideline:
Is this claim extraordinary; does it contradict what we think we know about how the world works?
The Singularity definitely falls into this category. The idea that you can handwave that sort of capability into existence runs contrary to everything we know about science and engineering: nothing useful happens for free, and every optimization needs real-world feedback. And when you look at the details of the Singularitarian arguments, there are an awful lot of gaps of the “and then a miracle occurs” variety.
Fusion reactors are fundamentally plausible because they match both our knowledge of nuclear physics and our experiences building better engines. Interestingly, I’ve seen it credibly suggested that fusion reactors of the kind we are currently trying to build won’t work out after all, because we are trying to make them too small, so the heat radiates away too quickly, so it will cost more to run the reactor than the value of the energy generated, and we need to either change our plans or make the reactors a lot bigger. But even if true, that’s not something that could possibly have been predicted without in-depth study of the subject matter.
We may need to break down which form of the Singularity we are discussing; see Eliezer’s list. I agree that a pure optimization process with no connection to the real world seems unlikely. But if, for example, general AI arrived at about the same time as access to even marginally efficient nanotech, that would allow a plausible method of optimization. Or, to use another example, suppose we construct a reasonably smart general AI and it turns out to require very little processing power compared to what we have available at the time. Either of these allows for a very efficient optimization process. Nothing in the Singularity notion goes against the fundamental picture of the world we’ve developed in the way that, say, psychic powers would.
If I had to make a continuum, I’d put them in roughly increasing order of plausibility:
[psychic powers, alien UFOs, Kurzweil-type Singularity, Yudkowskian Singularity, practical fusion power, space elevators] and there’s a major gap between alien UFOs and K-type Singularity. I’m not sure what would plausibly go in between them to narrow the gap. Maybe something like a Penrose version of consciousness?
Right, in truth none of the three versions really hangs together when you look at the arguments, though they are listed in decreasing order of plausibility.
“Our intuitions about change are linear”—no, they aren’t; we attach equal significance to equal percentage changes, so our intuition expects steady exponential change.
“Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.”—artificial intelligence, along with flying cars, moon bases and a cure for cancer, refutes this idea by its continued nonexistence.
“To know what a superhuman intelligence would do, you would have to be at least that smart yourself.”—my brother’s cat can predict that when it meows, he will put out food for it. He cannot predict whether the cat will eat the food.
“Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.”—the future has always been unpredictable, so by that definition we have always been in the Singularity.
“each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude”—knowing whether a change is actually an improvement takes more than just thinking about it.
“Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons.”—technological progress is much slower than the characteristic timescale of neurons.
That doesn’t mean the Singularity can’t exist by some other definition,
“For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down.”
but as Eliezer also points out, this definition does not imply any particular conclusions.
The Penrose version of consciousness is an interesting case. It is clearly something Penrose would be disposed to believe even if it were false (he pretty much says so in The Emperor’s New Mind) and we have no way to disprove it. Is it an extraordinary claim? I would be inclined to say so, but there might be room for reasonable disagreement on that. So while I think it is false, I’m not sure I would be confident dismissing it as bunk.
Thinking about it a bit more, I wonder if my greater confidence in dismissing the Singularity as bunk, compared to Penrose’s theory of consciousness, is influenced by the fact that the former is in my area of expertise and the latter is not. Obviously the more we know about something, the easier it is to be confident; but the original topic was possible methods of making summary judgments without detailed knowledge (given the impossibility of knowing all the details of everything).
Are there any physicists or neuroscientists in the audience who would be more confident in dismissing Penrose’s theory of consciousness?
I spent a year as a guest of Penrose’s biologist collaborator, Stuart Hameroff, at the University of Arizona, and my one peer-reviewed publication dates from that time, so I can tell you more than you want to know about this subject. :-)
First you should understand the order of events. Penrose published his book arguing that there should be a trans-Turing quantum-gravity process happening in the brain. Then Hameroff wrote to him and said, I bet it’s happening in the microtubules. Thus was born the version of the idea that most people hear about.
Penrose’s original argument combines an old interpretation of Gödel’s theorem with his own speculations about quantum gravity. The first part goes like this: For any mechanized form of mathematical reasoning, there are, necessarily, mathematical truths which it cannot prove. But we can know these propositions to be true. Therefore, human cognition must have capabilities which are not Turing-computable.
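For reference, the theorem the first part leans on can be stated precisely. This is a standard modern statement of Gödel’s first incompleteness theorem, not Penrose’s own notation:

```latex
% Gödel's first incompleteness theorem, in the form Penrose's argument relies on.
% Let T be a consistent, recursively axiomatizable theory extending Peano Arithmetic.
% Then there is a sentence G_T (the "Gödel sentence" of T) such that T proves
% neither G_T nor its negation, yet G_T is true in the standard model of
% arithmetic, because G_T asserts its own unprovability in T:
\[
  G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\bigl(\ulcorner G_T \urcorner\bigr)
\]
% The contested move is the step from "we can see that G_T is true" to "human
% reasoning is not Turing-computable": seeing that G_T is true requires knowing
% that T is consistent, which T itself cannot prove (the second incompleteness
% theorem).
```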
In the second part, Penrose observes that the whole of nongravitational physics is Turing-computable, but that gravitational physics is at least potentially not, because it may involve quantum sums over arbitrary 4-manifolds, and topological equivalence of 4-manifolds is not Turing-decidable. He also introduces one of his own physical ideas: Hawking evaporation of black holes appears to involve destruction of quantum information, so he proposes that conservation of probability flow is maintained by nondeterministic wavefunction collapse, which creates quantum information. He also has a technical argument against the possibility of superpositions of different geometries. So, if there are mesoscopic quantum superpositions in the brain whose components evolve towards mass distributions (and hence local space-time geometries) sufficiently different from each other that the superposition must break down, then, there is an opportunity for trans-Turing physical dynamics to play a role in human cognition.
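As an aside, the undecidability claim in the second part is a genuine theorem. A sketch of the standard chain of results behind it (these are textbook facts, not anything specific to Penrose’s book):

```latex
% The undecidability results behind the 4-manifold remark:
%   word problem for finitely presented groups undecidable (Novikov 1955, Boone 1958)
%     => isomorphism/triviality problem for finitely presented groups undecidable,
%     => homeomorphism problem for compact 4-manifolds undecidable (Markov 1958),
%        since every finitely presented group arises as the fundamental group
%        of a compact 4-manifold.
\[
  \text{There is no algorithm deciding } M_1 \cong_{\mathrm{homeo}} M_2
  \quad \text{for triangulated compact 4-manifolds } M_1, M_2 .
\]
% Whether quantum gravity actually requires summing over such manifolds is the
% speculative part; the undecidability itself is not.
```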
The physical argument is very ingenious but probably wrong in two out of three places. But first, how about the prior argument using Gödel? There are two key considerations here.
Firstly, the true propositions which a formal system cannot itself prove can be proven, if you know the interpretation of the formalism, and if you know the axioms to be true and the methods of inference valid under that interpretation. In other words, knowing the semantics of the system is what allows you to construct the undecidable propositions and have an opinion about their truth. The logician Solomon Feferman has shown that if you have an extra logical primitive, “logical reflection”, which amounts to accessing this information about meanings, then there are no undecidable propositions. The combination of a valid formal system and indefinitely iterated logical reflection gets you everything.
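A minimal sketch of the reflection machinery being alluded to, assuming the standard formulation (the choice of ordinal notation system matters a great deal and is glossed over here):

```latex
% Uniform reflection over a theory T: for each formula phi,
\[
  \mathrm{RFN}(T):\quad
  \forall x \,\bigl( \mathrm{Prov}_T\bigl(\ulcorner \varphi(\overline{x}) \urcorner\bigr)
  \rightarrow \varphi(x) \bigr).
\]
% Iterate along ordinal notations:
%   T_0 = T,   T_{a+1} = T_a + RFN(T_a),   unions at limit notations.
% Feferman (1962) showed that along suitable paths through the notations, this
% progression proves every true sentence of first-order arithmetic; that is the
% completeness result the comment summarizes. The catch: a "suitable path"
% cannot itself be picked out effectively, which is where the appeal to knowing
% the semantics comes back in.
```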
Secondly, this makes it plain that there is a connection between the Penrose-Gödel argument, and John Searle’s problem regarding the semantics of computational states. If a thought is actually a brain state, what is it about that brain state that makes it a thought about one thing rather than another? Penrose doesn’t address this issue, yet Feferman’s analysis makes it clear that it’s metacognition or reflective cognition about meanings which produces Gödelian insights.
It is possible to attack Penrose’s ultimate conclusion by saying there’s no empirical evidence that humans can engage in logical reflection of arbitrary order. (The higher iterations of logical reflection correspond to transfinite ordinals, because they involve induction over infinite axiom sets.) If humans can only logically reflect up to order N, then a formal system of order N+1 should be capable of equaling the human ability to reason. But really, the conclusion I draw is that we will see no end to this particular dispute until we understand how neurocomputational semantics works. Until then, we simply can’t offer a neurocognitive account of advanced mathematical reasoning.
As for the physical arguments, I try to judge them from the perspective of string theory. The bit about sums over arbitrary 4-manifolds might be true; string theory is a work in progress, like most particle physics theories it’s known and used only in an approximate form, and this is a level of detail which presently is neither used nor understood. On the other hand, black hole evaporation is a unitary process in string theory, so the ingenious idea of wavefunction collapse balancing quantum information loss loses its motivation. As for the technical argument about geometric superpositions, that only applies if you think superpositions are objective physical states rather than generalized probability distributions. If you take the latter view, the argument loses its potency.
Now, microtubules. My grasp of molecular neuroscience is a whole lot less than my grasp of physics, but it’s definitely true that neuronal microtubules are not thought to play much of a role in cognition or consciousness. Microtubules are dynamic structural organelles. They are scaffolding for the transport of vesicles, they move the chromosomes around during cell division, they are involved in pseudopod extrusion and cell motility. They occur in all your cells, not just neurons. Since a neuron is just another cell, one which has specifically been shaped to perform an information-processing function, it’s not surprising that microtubules are involved in the execution of that function. But everything known suggests the involvement is peripheral.
I ended up in Arizona because I had my own reasons for being interested in quantum brain theories. And I’ll say this much in favor of microtubules: if you are looking for a molecular structure in the brain which might contain long-lived quantum states, the microtubule is a great candidate. It gives you a two-dimensional space (a cylinder) protected from environmental interaction by the tails of the tubulins. A lot of cool quantum things can happen in two dimensions. The problem is, how would it be relevant to anything cognitive?
Penrose and Hameroff wrote some papers applying Penrose’s quantum-gravity collapse model to microtubules. I don’t believe those calculations apply to reality. I’ve also mentioned why, even if you could show that quantum coherence does exist in the microtubule, that doesn’t yet connect it to conscious cognition. But I will still put in a word for Penrose’s original conception of quantum-gravitational dynamics maybe playing a part in the physics of cognition.
If one does wish to suppose—as I do—that the neural correlate of consciousness is actually a quantum state of some brain subsystem, rather than a coarse-grained classical computational state; if one does suppose that the manifest attributes of conscious experience are to be identified with fundamental degrees of freedom in that quantum object; then it is logical to suppose that some of those degrees of freedom are what we would call, from a physical perspective, gravitational, and that they might even be dynamically relevant. The idea that Feferman’s operation of conscious logical reflection is computationally implemented by a gravitational subalgebra of the full set of physical transformations of state… that’s my version of Penrose’s idea. I certainly don’t regard it as a logical necessity; it’s just a stimulating hypothesis. I look forward to the day when we know enough that I can actually rule it in or out.
Excellent explanation, thanks! So if I’m understanding correctly, while there are severe problems with Penrose’s theory, it’s not in the category of things to be casually dismissed as bunk; experts have found it an interesting line of thought to investigate, at least.
You may be putting too much emphasis on what people would be predisposed to believe. While we should correct for our emotional predispositions when evaluating our own probability estimates, a predisposition says nothing substantive about whether a given claim is correct. Tendencies to distort my map in no way impact what the territory actually looks like.
Sure, at the end of the day there is no reliable way to tell truth from falsehood except by thorough scientific investigation.
But the topic at hand is whether, in the absence of the time or other resources to investigate everything, there are guidelines that will do better than random chance in telling us what’s promising enough to be worth how much investigation.
While the heuristic about predisposition to believe falls far short of certainty, I put it to you that it is significantly better than random chance—that in the absence of any other way to distinguish true claims from false ones, you would do quite a bit better by using that heuristic, than by flipping a coin.