Maybe my basic point is that there is more to the “stuff” than just “being causal”. This is why I talk about abstracted causal models as ontologically deficient. Describing yourself or the world as a state machine just says that reality is a merry-go-round of “states” which follow each other according to a certain pattern. It says nothing about the nature of those states, except that they follow the pattern. This is why functionalist theories of mind lead to patternist theories of identity.
But it’s clear that what we can see of reality is made of more than just causality. Causal relations are very important constitutive relations, but then we can ask about the relata themselves, the things connected by causality, and we can also look for connecting relations that aren’t causal relations. Being shaped like a square isn’t a causal relation. It’s a fact that can play a causal role, but it is not itself made of causality.
These are ontological questions, and the fact that we can ask them and even come up with the tentative ontologies that we do, itself must have ontological implications, and then you can attempt an ontological analysis of these implication relations… If you could go down that path, using beyond-Einsteinian intellectual superpowers, you should figure out the true ontology, or as much of it as is accessible to our sort of minds. I consider Husserl to be the person who got the furthest here.
One then wants to correlate this ontology derived from a phenomenological-epistemological circle of reflection, with the world-models produced in physics and biology, but since the latter models just reduce to state-machine models, they cannot in themselves move you beyond ontological hollowness. Eventually you must use an ontology derived from the analysis of conscious experience itself, to interpret the formal ontology employed by natural science. This doesn’t have to imply panpsychism; you may be able to say that some objects really are “things without an inside”, and other objects do “have a subjectivity”, and be able to specify exactly what it is that makes a difference.
This is a little removed from the indexical problem of
why am I me, and not someone else?
That’s a question which probably has no answer, beyond enumerating the causes of what you are. The deep reasons are reserved for why there is something rather than nothing, and why it is the sort of universe it is. But in a universe with many minds, you were always going to be one among many.
If you were to find that the nature of your personal existence looked rather improbable, that would revive the question a little. For example, if we thought electrons were conscious, then being a conscious being at the Avogadro’s-number-of-electrons level of organization, rather than at the single-electron level of organization, might look suspiciously improbable, given the much larger numbers of electrons in the universe. But then the question would be “why am I human, and not just an electron?” which isn’t quite what you asked.
I think, therefore “something” is
I agree with this part.
The universe doesn’t care whether I landed on Vulcan by shuttle or by energy beam; it’s the same configuration.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
Now if we ask whether it’s “still you” in both cases—one where you live out your life with physical continuity, and one in which you are briefly eradicated and then replaced by a physical duplicate—you do have some freedom of self-definition, so the answer may depend a little on the definition. (For now I will not consider the Yudkowskian possibility that there is a unique correct definition of personal identity to be found by superintelligent extrapolation of human cognitive dispositions, analogous to the CEV theory of how to arrive at a human-correct morality.)
But there are obvious and not-so-obvious problems with just saying “the configuration’s the same, therefore there’s no difference”. An obvious problem: suppose we make more than one copy of you—are they both “you”? Less obvious: what if the history of how the configuration was created does matter, in deciding whether you are the same person as before?
Does “having the memory of being in a green room” really imply “you have been in a green room”? We don’t normally trust memory that absolutely, and here we are talking about “memories” that were copied into the brain from a blueprint, rather than being caused in the usual fashion, by endogenous processing of sensory input. It is reasonable to imagine that you could be that person, whose brain was rewired in that way, and that after reflecting for long enough on the situation and on how the process worked, you concluded that it wasn’t you who was in that room, or even that nobody was in that room.
I’m not even convinced that the unlimited capacity to recreate a whole conscious mind “in midstream”, implied by so many thought-experiments, is necessarily possible. There are dynamical systems where you just can’t get to places deep in the state-space without crossing intermediate territory. If all that matters for identity is having the right ensemble of mesoscopic computational states (i.e. described at a level of coarseness, relative to the exact microphysical description, which would reduce a whole neuron to just a few bits), then it should be possible to create a person in mid-stream. But if the substrate of consciousness is a single quantum Hilbert space, for some coherent physical subsystem of the brain, then it’s much less obvious that you can do that. You might be able to bang together a classical simulation of what goes on in that Hilbert space, in mid-stream, but that’s the whole point of my version of quantum-mind thinking—that substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
But it’s clear that what we can see of reality is made of more than just causality.
Not to me. For instance, while consciousness is still mysterious to me, it sure has causal power, if only the power to make me think of it, and the causal power to make Chalmers write papers about it.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
I think I mean something stronger than that. You may want to re-read the relevant part of the Quantum Physics sequence. The universe actually doesn’t even encode the notion of different particles, so that talking about putting this carbon atom there and that carbon atom here doesn’t even make sense. When you swap two atoms, you’re back to square one in a stronger sense than when you swap two numbered (but otherwise indistinguishable) billiard balls. Configuration space is folded on itself, so it really is the same configuration, not a different one that happens to be indistinguishable from the inside.
substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
Err… Let my brain be replaced by a silicon chip. Let’s leave aside the question of personal identity. Is that thing conscious? It will behave the same as me, and write about consciousness the same way I do. If you believe that, and believe it still isn’t conscious, I guess you believe in p-zombies. I don’t. Maybe changing my substrate would kill me, but I strongly believe the result is still conscious, and human in the dimensions I care about.
For instance, while consciousness is still mysterious to me, it sure has causal power
I agree that consciousness has causal power. I’m saying consciousness is not just causal power. It’s “something” that has causal power. The ontological deficiencies of materialist and computational theories of consciousness all lie in what they say about the nature of this “something”. They say it’s a collection of atoms and/or a computational state machine. The “collection of atoms” theory explains neither the brute features of consciousness like color, nor the subtle features like its “unity”. The state machine theory has the same problems and also requires that you reify a particular abstracted description of the physical reality. In both cases, if one were to insist that that really is the ontological basis of everything, property dualism would be necessary, just to accommodate phenomenological (experiential) reality. But since we now have a physics based on Hilbert spaces and exotic algebras, rather than on particles arranged in space, I would hope to find a physical ontology that can explain consciousness without property dualism, and in which the physical description of the brain contained “entities” which really could be identified with the “entities” constituting conscious experience, and not just correlated with them.
The universe actually doesn’t even encode the notion of different particles, so that talking about putting this carbon atom there and that carbon atom here doesn’t even make sense.
The basis for that statement is that when you calculate the transition probability from “particle at x0, particle at y0” to “particle at x1, particle at y1”, you sum over histories where x0 goes to x1 and y0 goes to y1, as well as over histories where x0 goes to y1 and y0 goes to x1. But note that in any individual history, there is persistence of identity.
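That sum over direct and exchange histories can be sketched in a few lines of code. This is a toy illustration only: the single-particle amplitude `amp` below is invented purely for demonstration and is not derived from any real dynamics.

```python
import cmath

# Toy illustration of summing over direct and exchange histories for two
# identical particles. `amp` is a made-up single-particle amplitude, a
# unit-magnitude phase depending only on the displacement, scaled down.
def amp(src, dst):
    return cmath.exp(1j * (dst - src)) / 2

def transition_prob(x0, y0, x1, y1, exchange_sign=+1):
    """P((x0, y0) -> (x1, y1)) for two identical particles."""
    direct = amp(x0, x1) * amp(y0, y1)    # history: x0 -> x1, y0 -> y1
    exchange = amp(x0, y1) * amp(y0, x1)  # history: x0 -> y1, y0 -> x1
    # bosons: the two amplitudes add (+1); fermions: they subtract (-1)
    return abs(direct + exchange_sign * exchange) ** 2

# Swapping which particle "ends up where" changes nothing, because both
# assignments are already summed over before squaring:
p1 = transition_prob(0.0, 1.0, 2.0, 3.0)
p2 = transition_prob(0.0, 1.0, 3.0, 2.0)
assert abs(p1 - p2) < 1e-12
```

With these particular made-up amplitudes the direct and exchange terms happen to coincide, so the fermionic version (`exchange_sign=-1`) cancels to zero, a cartoon of Pauli exclusion.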
I suppose the real logic here is something like “I am a particular configuration, and contributions to my amplitude came from histories in which my constituent particles had different origins.” So you ground your identity in the present moment, and deny that you even had a unique previous state.
Pardon me for being skeptical about that claim—that my present moment is either to be regarded as existing timelessly, not actually as one stage in a connected flow of time, or alternatively as a confluence of multiple intersecting histories that then immediately diverges into multiple futures rather than a unique one.
The ontological implications of quantum mechanics are far from self-evident. If I truly felt driven to believe in the many worlds interpretation, I would definitely want to start with an ontology of many histories that are self-contained but which are interacting neighbors. In a reality like that, there’s no splitting and joining, there are just inter-world “forces”. For some reason, no-one has even really tried to develop such a model, despite the conservation of probability density flow which allows a formalism like Bohmian mechanics to work.
Returning to the question of identity for particles, another option, which is more in line with my own ideas, is to think of the ontological state as a tensor product of antisymmetrized n-particle states where the size of n is variable both between the tensor factors and during the history of an individual factor. The ontology here is one in which the world isn’t really made of “particles” at all, it’s made of “entities” with a varying number of degrees of freedom, and a “particle” is just an entity with the minimum number of degrees of freedom. The fungibility of “particles” here would only apply to degrees of freedom within a single entity; the appearance of fungibility between different entities would have a dynamical origin. I have no idea whether you can do that in a plausible, uncontrived way; it’s yet another possibility that hasn’t been explored. And there are still more possibilities.
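In the standard formalism, an antisymmetrized n-particle state of the kind mentioned above is a Slater determinant. As a hedged sketch (the amplitude table `phi` is a placeholder for illustration, not part of the proposal above), here is the antisymmetrization written out explicitly as a signed sum over permutations:

```python
import itertools
import math

# phi[i][j] is a (made-up) amplitude for particle j to occupy
# single-particle state i.
def perm_sign(perm):
    # parity of the permutation: (-1) ** (number of inversions)
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def slater_amplitude(phi):
    """Antisymmetrized n-particle amplitude (a Slater determinant)."""
    n = len(phi)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        term = perm_sign(perm)
        for i in range(n):
            term *= phi[i][perm[i]]
        total += term
    return total / math.sqrt(math.factorial(n))

# Two fermions in the same single-particle state (identical rows) give
# amplitude zero: Pauli exclusion falls out of the antisymmetrization.
phi_same = [[0.6, 0.8], [0.6, 0.8]]
assert abs(slater_amplitude(phi_same)) < 1e-12
```

The point of writing it as a permutation sum rather than a matrix determinant is to make the fungibility visible: no term in the sum privileges one assignment of “particle labels” to states over another.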
If you believe that, and believe it still isn’t conscious, I guess you believe in p-zombies.
Yes, definitely. Especially if we’re going to talk about imperfect simulations, as has been discussed on one or two recent threads. A spambot, or a smiley face on a stick, is a type of “simulated human being”. We definitely agree, there’s no-one home in either of those situations, right? The intuition that an upload would be conscious arises from the beliefs that a human brain is conscious, that a human brain consists of numerous discrete processors in decentralized communication with each other, and that being conscious must therefore somehow arise from being a particular sort of computational network. But although we don’t know the precise condition, the universality of computation implies that some sufficiently accurate simulation would be capable of reproducing that network of computation in a new medium, in a way that meets the unknown criterion of consciousness, and therefore conscious uploads must be possible.
I have argued in a recent comment that functionalism, and also ordinary atomistic materialism, implies property dualism. The constituent properties of consciousness, especially the basic sensory properties, do not exist in standard physical ontology, which historically was constructed explicitly to exclude those sensory properties. So if you want to extend physical ontology to account for consciousness as well, you have to add some new ingredients. Personally I hope for a new physical ontology which doesn’t have to be dualistic, and I even just mentioned a possible mathematical ingredient, namely a division of the world into “multi-particle” tensor factors rather than into single particles. If a single whole conscious experience could be identified with a single tensor factor, that would at least begin to explain the unity of consciousness; you would have elementary degrees of freedom canonically and objectively clustered together into complex unities, whereas in the current ontology, you just have mobs of particles whose edges are a bit fuzzy and arbitrary, something which provides a poor ontological foundation for a theory of objectively existing persons.
Returning to the issue of zombies, suppose for the purposes of argument that people really are sharply defined tensor factors of the wavefunction of the universe, and that conscious states, in our current formalism, would correspond to some of these antisymmetrized n-fermion wavefunctions that I’ve mentioned. The point is that, in this scenario, consciousness is always a property of a single tensor factor, but that you could simulate one of those very-high-dimensional tensor factors by using a large number of low-dimensional tensor factors. This implies that you could simulate consciousness without the simulation being conscious.
I don’t at all insist that this is how things work. The business with the tensor factors would be one of my better ideas, but it’s just a beginning—it’s a long conceptual trek from an n-fermion wavefunction to an intricate state of consciousness such as we experience—and the way things actually work may be very, very different. What I do insist is that none of the orthodox materialist theories of mind work. An explicit property dualism, such as David Chalmers has proposed, at least has room in its ontology for consciousness, but it seems contrived to me. So I think the answer is something that we haven’t thought of yet, that involves quantum biology, new physical ontology, and revived respect for the ontology of mind.
I find your writing difficult to read. I’m tired right now, so I plan to answer properly later, in a few days. Hopefully my brain will process it better then.