Hi Capla, no, that is not what Gödel's theorem says (actually there are two incompleteness theorems):
1) Gödel's theorems don't talk about what is knowable, only about what is (formally) provable in a mathematical or logical sense.
2) The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an algorithm is capable of proving all truths about the natural numbers. In other words, for any such system there will always be statements about the natural numbers that are true but unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
3) This doesn't mean that some things can never be proven, although it does present some challenges. It means that we cannot create a system, consistent within itself, that can algorithmically demonstrate or prove all things that are true for that system.
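To put points 2 and 3 compactly (this is a standard textbook-style formulation, not a quotation from Gödel): for any consistent, effectively axiomatized theory T that includes basic arithmetic,

```latex
% First incompleteness theorem: there exists a sentence G_T (the
% "Gödel sentence" of T) that T can neither prove nor refute, even
% though G_T is true in the standard model of the natural numbers:
\exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
% Second incompleteness theorem: T cannot prove the arithmetized
% statement of its own consistency:
T \nvdash \mathrm{Con}(T)
```

The key hypotheses are "consistent" and "effectively axiomatized" (theorems listable by an algorithm); drop either and the theorems no longer apply.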
This creates some significant challenges for AI and consciousness, but perhaps not insurmountable ones.
For example, as far as I know, Gödel's theorems rest on classical logic. Quantum logic, in which something can be both "true" and "not true" at the same time, may yield different outcomes.
Regarding consciousness, I think I would agree with the thrust of this post: that we cannot yet fully explain or reproduce consciousness (hell, we have trouble defining it) does not mean that it will forever be beyond reach. Consciousness is only mysterious because of our lack of knowledge of it.
And we are learning more all the time:
http://www.ted.com/talks/nancy_kanwisher_the_brain_is_a_swiss_army_knife? http://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness?
We are starting to unravel some of the mechanisms by which consciousness emerges from the brain, since consciousness appears to be a process phenomenon rather than a physical property.
Thank you. I'm a little bit more informed.
My issue with consciousness involves p-zombies. Any experiment that aims to understand consciousness would have to be able to detect it, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would need (to simplify) an independent variable that we could manipulate to see whether consciousness is present or not, depending on the manipulation. We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment investigating consciousness.
As Eliezer points out, that an individual says he's conscious is a pretty good signal of consciousness, but we can't necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure (humans gain a survival advantage from social sharing of internal realities). On the flip side, a savvy but non-conscious AI may talk about its "internal states" because it is guessing the teacher's password in the realest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping them will accomplish its goals. I don't know how we could possibly tell whether the AI is aping consciousness for its own ends or is actually conscious. If consciousness is thus undetectable, I can't see how science can investigate it.
That said, I am very well aware that "throughout history, every mystery ever solved has turned out to be not magic," and that every single time something has seemed inscrutable to science, a reductionist explanation has eventually surfaced. Knowing this, I have to seriously downgrade my confidence that "No, really, this time it is different. Science really can't pierce this veil." I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.