I’d like to ask him if it would be possible to frame the hard problem in terms of computer science (e.g. in terms of a cellular automaton), ideally coming up with a mathematical description of the problem.
If Turing-completeness is insufficient to compute consciousness then it should be perfectly possible to pinpoint where computer science breaks down.
> ideally coming up with a mathematical description of the problem.
That would be like asking for a mathematical description of the problem “why is there something rather than nothing?”
One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods. So if it can’t be turned into a program, or a Bayesian formula, or..., it’s deemed to be meaningless.
But every formalism starts life as an ontology. Before we “formalized” logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers… The quest for a technical statement of philosophical hard problems often amounts to an evasion of a real ontological problem that underlies or transcends the formalism or discipline of choice. XiXiDu, you don’t strike me as someone who would deliberately do this, so maybe you’re just being a little naive—you want to think rigorously, so you’re reaching for a familiar model of rigor. But the really hard questions are characterized by the fact that we don’t know how to think rigorously about them—we don’t have a method, ready at hand, which allows us to mechanically compute the answer. There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question, and you will be investigating how rigor and method were introduced where previously they did not exist. That is the level at which “hard problems” live.
The combination of computationalism and physicalism has become a really potent ossifier of thought, because it combines the rule-following of formalism with the empirical relevance of physics. “We know it’s all atoms, so if we can reduce it to atoms we’re done, and neurocomputationalism means we can focus on explaining why a question was asked, rather than on engaging with its content”—that’s how this particular reductionism works.
There must be a Zen-like art to reawakening fresh perception of reality in individuals attached to particular formalisms, formulas, and abstractions, but it would require considerable skill, because you have to enter into the formalism while retaining awareness of the ontological context it supposedly represents: you have to reach the heart of the conceptual labyrinth where the reifier of abstractions is located, and then lead them out, so they can see directly again the roots in reality of their favorite constructs, and thereby also see the aspects of reality that aren’t represented in the formalism, but which are just as real as those which are.
> If Turing-completeness is insufficient to compute consciousness then it should be perfectly possible to pinpoint where computer science breaks down.
But that isn’t the problem. Chalmers never asserts that you can’t simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world. The question is why it feels like something to be what we are: why is there any awareness. (There are, again, ways to evade the question here, e.g. by defining awareness behavioristically.)
> There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question, and you will be investigating how rigor and method were introduced where previously they did not exist.
The history of the concept of computation seems very analogous to the development of the concept of justification. I think we’re at roughly the Leibniz stage of figuring out justification. (I sort of want to write up a thorough analysis of this somewhere.)
> One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods.
I tried to ask a question that best fits this community. The answer to it would be interesting even if it is a wrong question.
Besides, I am not joking when I say that I haven’t thought about the whole issue. It simply does not have any priority at this point, because it is a very complex issue and I still have to learn other things first. I admit my ignorance here. Yet I used the chance to indirectly ask one of the leading experts in the field a question that I perceived to suit the Less Wrong community.
> So if it can’t be turned into a program, or a Bayesian formula, or..., it’s deemed to be meaningless.
I don’t think that way at all. I think that it is a fascinating possibility, and I am very glad that people like you take it seriously. I encourage you to keep it up and not let yourself be discouraged by negative reactions.
Yet I don’t know what it would mean for a problem to be algorithmically or mathematically undefinable. I can only rely on my intuition here and admit that I feel that there are such problems. But that’s it.
> But every formalism starts life as an ontology.
You pretty much lost me at ontology. All I really know about the term “ontology” is the Wikipedia abstract (I haven’t read your LW posts on the topic either).
Please just stop here if I am wasting your time. I really didn’t do the necessary reading yet.
To better fathom what you are talking about, here are three points you might or might not agree with:
1) “Experiences like ‘green’ are ontologically basic, and are not reducible to interacting nonmental parts.”
Well, I don’t know what “mental” denotes so I am unable to discern it from that which is “nonmental”.
Isn’t the reason that we perceive “green” instead of “particle interactions” the same as the reason that we don’t perceive the earth to be a sphere? Due to a lack of information, and because of our relation to the earth, we perceive it to be round from afar and flat at a close-up range.
If you view “green” as an approximation then the fact that there are “subjects” that experience “green” can be deduced from fundamental limitations in the propagation and computation of information.
It is simply impossible for a human being to see the earth as a sphere from where we stand; we are only able to deduce it. Does that mean that the flatness of the earth is ontologically basic, because it does not follow from physics? Well, it does follow.
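To put a number on “flat at a close-up range”, here is a minimal sketch (my own back-of-the-envelope figures, not anything from the discussion above) of how little a sphere the size of the earth deviates from a flat tangent plane over perceptual distances, using the small-distance approximation h ≈ d²/2R:

```python
# Toy calculation: how far the earth's surface drops below a flat tangent
# plane over short distances. For d much smaller than R, h ~ d^2 / (2R).

EARTH_RADIUS_M = 6_371_000  # mean radius of the earth in meters

def drop_below_tangent(d_meters: float) -> float:
    """Approximate drop (in meters) of the sphere below a tangent plane after d meters."""
    return d_meters ** 2 / (2 * EARTH_RADIUS_M)

for d in (10, 100, 1_000, 10_000):
    print(f"over {d:>6} m the surface drops ~{drop_below_tangent(d):.4f} m")
# over     10 m the surface drops ~0.0000 m
# over    100 m the surface drops ~0.0008 m
# over   1000 m the surface drops ~0.0785 m
# over  10000 m the surface drops ~7.8481 m
```

At walking scales the deviation from flatness is far below what our senses can resolve, which is all the “approximation” reading of point 1 needs.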
2) “If consciousness is the spatial arrangement of particles, then you ought to be able to build a conscious machine in the Game of Life. It is Turing-complete after all, and most physicalists insist that consciousness is computable as well.
But I find that absurd. Sure you can get complex patterns, but how would these patterns share information? Individual cells aren’t aware of whatever pattern they’re in. If there is awareness, it should either cover only individual cells (no unified field of consciousness, only pixel-like awareness) or cover all cells—panpsychism, which isn’t true either. It would be really weird to look at the board and say, this 10000x10000 section of the board is consciousness, but the 10x10 section to the left isn’t, nor is this 10k x 10k section of noise.
Where is the information coming from that allows someone to make this judgment about consciousness? It doesn’t seem to be in the board. So if you stick to the automaton, you have only two options.”
The above was written by user:muflax in a reply on Google+. I don’t know enough about cellular automata, but it sure sounds like a convincing argument that Turing-completeness is insufficient.
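Since I keep invoking the Game of Life without knowing it well, here is a minimal sketch of its update rule (my own illustration, not muflax’s code). It at least makes the locality point concrete: the rule only ever consults a cell’s eight neighbors, so pattern-level facts, such as a glider “moving”, are represented nowhere in the computation itself:

```python
# Conway's Game of Life on an unbounded grid, represented as the set of
# coordinates of live cells. Each cell's next state depends only on its
# eight neighbors; no cell has access to the larger pattern it is part of.
from collections import Counter

def step(live: set) -> set:
    """Advance the board by one generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider reappears shifted diagonally by (1, 1):
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

That the pattern “moved” is a fact about the whole configuration across time; no individual cell computes or stores it.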
3) See my post here.

How is what I wrote there related to the overall problem, if at all, and do you agree?
> Before we “formalized” logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers...
I am not sure that truth, reasoning or numbers really exist in any meaningful sense because I don’t know what it means for something to “exist”. There are no borders out there.
> There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question...
I don’t like the word “invented” very much. I think that everything is discovered. And the reason for the discovery is that, on the level at which we reside, and given that we all have similar computational limits, the world appears to have distinguishable properties that can be labeled by the use of shapes. But that is simply a result of our limitations rather than a hint at something more fundamental.
> The question is why it feels like something to be what we are: why is there any awareness.
Is the human mind a unity? Different parts seem to observe each other. What you call an “experience” might be the interactive inspection of conditioned data: a sort of hierarchical computation that falls off quickly from level to level. Your brain might have a strong experience of green but a weaker experience that you are experiencing green. That you have an experience of the experience of the experience of green is an induction that completely replaces the original experience of green and is observed instead.

I have this vision of consciousness as something like two cameras facing each other behind semi-transparent mirrors, the reflections fading as all computational resources are exhausted.

I don’t know how this could possibly work given a cellular automaton, though.
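To make the “hierarchical computation that falls off quickly” picture slightly less vague, here is a toy sketch (entirely my own hypothetical construction, not a model of any real mind): each level observes only a compressed summary of the level below it, and a shared resource budget makes the tower of observations bottom out after a few steps:

```python
# Toy model: a stack of "observers", each inspecting a coarsened copy of
# the level below, until a shared computational budget is exhausted.

def coarsen(signal: list) -> list:
    """Summarize a signal by averaging adjacent pairs (halving its length)."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def observe(signal: list, budget: int) -> list:
    """Build a hierarchy of ever-coarser observations until the budget runs out."""
    levels = [signal]
    while budget >= len(levels[-1]) > 1:
        budget -= len(levels[-1])           # each act of observation costs resources
        levels.append(coarsen(levels[-1]))  # the next level sees only a summary
    return levels

raw_green = [0.9, 0.8, 0.95, 0.85, 0.9, 0.92, 0.88, 0.91]  # "experience of green"
for depth, level in enumerate(observe(raw_green, budget=16)):
    print(f"level {depth}: {level}")
# level 0 is the experience itself; each higher level is a fainter
# "experience of the experience", and the tower stops after a few levels.
```

This is no answer to the hard problem, of course; it only illustrates why the regress of observations would fade rather than continue forever.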
> Chalmers never asserts that you can’t simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world.
Wow, okay. I seem to have confused him with someone else.