The observation that quantum mechanics, when expressed in a form which makes “measurement” an undefined basic concept, does not provide an objective and self-sufficient account of reality...
This is wrong from the beginning: a measurement simply refers to the creation of mutual information, which is indeed well-defined. In quantum-level descriptions, it is represented by entanglement between components of the wavefunction. (The concept of mutual information then meshes nicely with the known observations of thermodynamics in allowing “entropy” to be precisely defined in information-theoretic terms.)
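A minimal sketch of that claim, assuming only textbook two-qubit quantum mechanics and numpy: treat a "measurement" as a CNOT interaction that entangles a system qubit with a pointer qubit, and the quantum mutual information between them goes from zero to two bits.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: S(rho) = -sum_i p_i log2 p_i over the eigenvalues p_i."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Reduced density matrix of one qubit of a two-qubit state (keep = 0 or 1)."""
    rho4 = rho.reshape(2, 2, 2, 2)        # indices: (A, B, A', B')
    if keep == 0:
        return np.einsum('ijkj->ik', rho4)  # trace out B
    return np.einsum('ijil->jl', rho4)      # trace out A

# System qubit in superposition, pointer qubit in |0>: a product state.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
psi = np.kron(plus, zero)

# "Measurement" = CNOT: the pointer flips conditional on the system,
# which entangles the two.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

for label, state in [("before", psi), ("after", CNOT @ psi)]:
    rho = np.outer(state, state.conj())
    S_A = von_neumann_entropy(partial_trace(rho, 0))
    S_B = von_neumann_entropy(partial_trace(rho, 1))
    S_AB = von_neumann_entropy(rho)
    print(label, "I(A:B) =", round(S_A + S_B - S_AB, 3), "bits")
# before: 0.0 bits; after: 2.0 bits of quantum mutual information
```

Before the interaction the mutual information is zero; afterwards it is two bits, which is the precise sense in which the interaction counts as a measurement in this picture.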
So what’s my problem? Why don’t I just devote the rest of my life to the achievement of this vision? There are two, maybe three amendments I would wish to make. What I call the ontological problem has not been addressed; the problem of consciousness, which is the main subproblem of the ontological problem, is also passed over; …
… Not only is it an unsolved problem, but we are trying to solve it in the wrong way: presupposing the desiccated ontology of our mathematical physics, and trying to fit the diversities of phenomenological ontology into that framework. This is, I submit, entirely the wrong way round. One should instead proceed as follows: I exist, and among my properties are that I experience what I am experiencing, and that there is a sequence of such experiences. If I can free my mind from the assumption that the known classes of abstract object are all that can possibly exist, what sort of entity do I appear to be? Phenomenology—self-observation—thereby turns into an ontology of the self, and if you’ve done it correctly (I’m not saying this is easy), you have the beginning of a new ontology which by design accommodates the manifest realities of consciousness. The task then becomes to reconstitute or reinterpret the world according to mathematical physics in a way which does not erase anything you think you established in the phenomenological phase of your theory-building.
If I understand you correctly, you’re saying, start from self-observation, but permit the self to be ontologically basic, then re-interpret mathematical physics so that it doesn’t deny your conscious experience.
Apart from permitting the self to be ontologically basic, I don’t see how this differs from Yudkowsky’s approach. You seem to be under the false impression that he wants to somehow deny the reality of consciousness. But he doesn’t—instead, he says to ask questions of the form, “why do I believe I’m conscious, or have the feeling of consciousness?” and then search through the ways that mathematical physics would allow it to be generated, at which point it starts to match your approach.
So what do you gain from positing the ontologically-basic self? It’s not progress, because you’re still left with the problem of why (you can reasonably infer) there is consciousness in all of these other beings you observe. Why is it so correlated with a kind of biological form, one that goes from not-conscious to conscious? How does this self thing work in the forms that we see it working in? But once you know the answer to those questions, what purpose does the additional ontological supposition serve?
Incidentally, have you actually exhausted all the ways to account for your consciousness? Read the free will series? Tried to represent consciousness using the entire information-theoretic toolbox (mutual information, conditional independence, entropy maximization, etc.)?
a measurement simply refers to the creation of mutual information
I did say: “quantum mechanics, when expressed in a form which makes “measurement” an undefined basic concept”, which is how it is often expressed. And maybe you’ll even agree with me that in that form, it is definitely incomplete.
what do you gain from positing the ontologically-basic self?
As for how such an entity relates to everything else, that’s the point of exploring a monadological interpretation of the physics (and hence the biology) that we already have. But I don’t mind if you want evidence of functionally relevant quantum coherence in the brain before you take it seriously.
have you actually exhausted all the ways to account for your consciousness?
Reverting to the case of color, the information-theoretic analysis brings us no closer to getting actual shades of color existing anywhere in a universe of electrons and quarks. It just talks about correlations and dependencies among the states and behaviors of colorless aggregates of colorless particles. Since actual (phenomenal) color is causally relevant—we see it, we talk about it—you can perform an info-theoretic analysis of its causes, correlates, and effects too. But just having an ontology with a similar causal structure is not going to give you the thing itself. I endorse 100% the causal analysis of qualia, as a pathway to knowledge, but not their causal reduction.
I did say: “quantum mechanics, when expressed in a form which makes “measurement” an undefined basic concept”, which is how it is often expressed.
Well, why were you directing that remark at an audience that doesn’t leave measurement as an undefined basic concept, and implying that the audience falls prey to this error?
I escape the conscious sorites paradox without vaguing out.
You crucially depend on the remaining vagueness of the exact moment when “a shade of blue” arises, and you fail to produce any more specificity than in your latest post.
Looks to me like you didn’t escape either.
As for how such an entity relates to everything else, that’s the point of exploring a monadological interpretation of the physics (and hence the biology) that we already have. But I don’t mind if you want evidence of functionally relevant quantum coherence in the brain before you take it seriously.
The problem is more fundamental than that—I would need to know why I get to the physics-based explanation faster by making your ontological assumptions, not just the fact that you could fit it into your ontology.
Reverting to the case of color, the information-theoretic analysis brings us no closer to getting actual shades of color existing anywhere in a universe of electrons and quarks.
Just like an information theoretic analysis of a program brings us no closer to getting actual labels for the program’s referents.
why were you directing that remark at an audience that doesn’t leave measurement as an undefined basic concept, and implying that the audience falls prey to this error?
What I actually said (check the original sentence!) was that this audience recognizes the error and advocates many-worlds as the answer.
You crucially depend on the remaining vagueness of the exact moment when “a shade of blue” arises, and you fail to produce any more specificity than in your latest post.
This apparently runs several things together. Unfortunately I see no way to respond without going into tedious detail.
You asked: why say the self is a single object? I answered: so I don’t have to regard its existence as a vague (un-objective) matter. Bear in mind that the motivating issue here is not the existence of color, but the objectivity of the existence of the self. If selves are identified with physical aggregates whose spatial boundaries are somewhat arbitrary, then the very existence of a self becomes a matter of definition rather than a matter of fact.
In your remark quoted above, you seem to be thinking of two things at once. First, I asked Psychohistorian when it is that color comes into being, if it is indeed implicitly there in the physics we have. Second, I have not offered an exact account of when, where and how subjective color exists in the conscious monad and how this relates to the quantum formalism. Finally comes the concluding criticism that I am therefore tolerating vagueness in my own framework, in a way that I don’t tolerate in others.
There are maybe three things going on here. In the original discussions surrounding the Sorites paradox (and Robin Hanson’s mangled worlds), it was proposed that there is no need to have a fully objective and non-arbitrary concept of self (or of world). This makes vagueness into a principle: it’s not just that the concept is underdetermined, it’s asserted that there is no need to make it fully exact.
The discussion with Psychohistorian proceeds in a different direction. Psychohistorian hasn’t taken a stand in favor of vagueness. I was able to ask my question because no-one has an exact answer, Psychohistorian included, but Psychohistorian at least didn’t say “we don’t need an exact answer”—and so didn’t “vague out”.
For the same reason, I’m not vaguing out just because I don’t yet have an exact theory of my own about color. I say it’s there, it’s going to be somewhere in the monad, and that it is nowhere in physics as conventionally understood, not even in a stimulus-classifying brain.
I hate it when discussions bog down in this sort of forensic re-analysis of what everyone was saying, so I hope you can pick out the parts which matter.
The problem is more fundamental than that—I would need to know why I get to the physics-based explanation faster by making your ontological assumptions, not just the fact that you could fit it into your ontology.
The ontological assumptions are made primarily so I don’t have to disbelieve in the existence of time, color, or myself. They’re not made so as to expedite biophysical progress, though they might do so if they’re on the right track.
Just like an information theoretic analysis of a program brings us no closer to getting actual labels for the program’s referents.
Colors are phenomena, not labels. It’s the names of colors which are labels, for contingent collections of individual shades of color. There is no such thing as objective “redness” per se, but there are individual shades of color which may or may not classify as red. It’s all the instances of color which are the ontological problem; the way we group them is not the problem.
There are maybe three things going on here. In the original discussions surrounding the Sorites paradox (and Robin Hanson’s mangled worlds), it was proposed that there is no need to have a fully objective and non-arbitrary concept of self (or of world). This makes vagueness into a principle: it’s not just that the concept is underdetermined, it’s asserted that there is no need to make it fully exact.
The discussion with Psychohistorian proceeds in a different direction. Psychohistorian hasn’t taken a stand in favor of vagueness. I was able to ask my question because no-one has an exact answer, Psychohistorian included, but Psychohistorian at least didn’t say “we don’t need an exact answer”—and so didn’t “vague out”.
Fair point about missing the context on my part, and I should have done better, since I rip on others when they do the same—just ask Z M Davis!
Still, if this is what’s going on here—if you think rejection of your ontology forces you into one of two unpalatable positions, one represented by Robin_Hanson, and the other by Psychohistorian—then this rock-and-a-hard-place problem of identity should have been in your main post to show what the problem is, and I can’t infer that issue from reading it.
The ontological assumptions are made primarily so I don’t have to disbelieve in the existence of time, color, or myself.
Again, nothing in the standard LW handling requires you to disbelieve in any of those things, at the subjective level; it’s just that they are claimed to arise from more fundamental phenomena.
They’re not made so as to expedite biophysical progress, though they might do so if they’re on the right track.
Then I’m lost: normally, the reason to propose e.g. a completely new ontology is to eliminate a confusion from the beginning, thereby enhancing your ability to achieve useful insights. But your position is: buy into my ontology, even though it’s completely independent of your ability to find out how consciousness works. That’s even worse than a fake explanation!
Just like an information theoretic analysis of a program brings us no closer to getting actual labels for the program’s referents.
Colors are phenomena, not labels. It’s the names of colors which are labels, for contingent collections of individual shades of color. There is no such thing as objective “redness” per se, but there are individual shades of color which may or may not classify as red. It’s all the instances of color which are the ontological problem; the way we group them is not the problem.
I think you’re misunderstanding the Drescher analogy I described. The gensyms don’t map to our terms for color, or classifications for color; they map to our phenomenal experience of color. That is, the distinctiveness of experiencing red, as differentiated from other aspects of your consciousness, is like the distinctiveness of several generated symbols within a program.
The program is able to distinguish between gensyms, but the comparison of their labels across different program instances is not meaningful. If that’s not a problem in need of a solution, neither should qualia be, since qualia can be viewed as the phenomenon of being able to distinguish between different data structures, as seen from the inside.
(To put it another way, your experience of color has to be different enough so that you don’t treat color data as sound data.)
I emphasize that Drescher has not “closed the book” on the issue; there’s still work to be done. But you can see how qualia can be approached within the reductionist ontology espoused here.
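A toy version of the gensym point, in case it helps (just a sketch, not Drescher’s own construction): within one run, a program can reliably tell its generated symbols apart, but the particular labels those symbols happen to receive carry no meaning across runs.

```python
# Fresh, contentless symbols: each one is distinguishable only by identity.
RED, GREEN, MIDDLE_C = object(), object(), object()

# Within this run the distinctions are perfectly sharp...
assert RED is not GREEN and GREEN is not MIDDLE_C

# ...so the program never confuses "color data" with "sound data":
percepts = [(RED, 0.7), (MIDDLE_C, 0.2), (GREEN, 0.4)]
colors = [p for p in percepts if p[0] in (RED, GREEN)]
sounds = [p for p in percepts if p[0] is MIDDLE_C]

# But the "label" each symbol happens to get (here, just its memory address)
# is an accident of this particular run; comparing labels across program
# instances is meaningless. Only the within-run structure of distinctions
# does any work.
print(id(RED), id(GREEN), id(MIDDLE_C))
```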
In retrospect, I think this would have been a better order of exposition for the monadology article:
Start with a general discussion of the wavefunction of the brain, implicitly within a many-worlds framework, i.e. nothing about collapse. Most of us here will agree that this ought to be a conceptually valid enterprise, though irrelevant to biology because of decoherence.
Next, bring up the possibility of quantum effects being relevant to the brain’s information processing after all. In the absence of specific evidence, readers might be skeptical, but it’s still a logically valid concept, a what-if that doesn’t break any laws of physics.
Next, try to convey the modification to the quantum formalism that I described, whereby fundamental degrees of freedom (such as string-theoretic D0-branes) become entangled in island sets that are disentangled from everything else. (The usual situation is that everything is entangled with everything else, albeit to a vanishingly small degree.) There might be a few more frowns at this point, perplexed wondering as to where this is all leading, but it’s still just mathematics.
And finally say, these island sets are “monads”, and your subjective experience is the inner state of one big monad, and the actual qualities and relations which make up reality (at least in this case) are the ones which are subjectively manifest, rather than the abstractions we use for formal representation and calculation. This is the part that sounds like “woo”, where all these strange ologies like ontology, phenomenology, and monadology, show up, and it’s the part which is getting the strongest negative reaction.
In privately developing these ideas I started with the phenomenology, and worked backwards from there towards the science we have, but an exposition which started with the science we have and incrementally modified it towards the phenomenology might at least have made 3/4ths of the framework sound comprehensible (the three steps where it’s all still just mathematics).
Then again, by starting in the middle and emphasizing the ontological issues, at least everyone got an early taste of the poison pill beneath the mathematical chocolate. Which might be regarded as a better outcome, if you want to keep woo out of your system at all costs.
Next, try to convey the modification to the quantum formalism that I described, whereby fundamental degrees of freedom (such as string-theoretic D0-branes) become entangled in island sets that are disentangled from everything else. (The usual situation is that everything is entangled with everything else, albeit to a vanishingly small degree.)
I don’t see why you need to bring in quantum mechanics or D0-branes here. What you’re describing is just a standard case of a low-entropy, far-from-equilibrium island, also known as a dissipative system; the category includes Bénard cells, control systems, hurricanes, and indeed life itself.
They work by using a fuel source (negentropy) to create internal order, far from equilibrium with their environment, and have to continually export enough entropy (disorder) to make up for that which they create inside. While the stabilizing/controlling aspect of them will necessarily be correlated with the external “disturbances”, some part of it will be screened off from the environment, and therefore disentangled.
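One way to cash out "screened off", as a sketch under the simplest possible assumptions (made-up numbers, a three-variable Markov chain environment -> boundary -> interior): the interior carries mutual information about the environment, but the conditional mutual information given the boundary is zero, which is exactly the screening-off condition.

```python
import numpy as np
from itertools import product

# Toy Markov chain: environment E -> boundary B -> interior I, all binary.
# "Screened off" means I(E; I | B) == 0 even though I(E; I) > 0.
p_E = np.array([0.5, 0.5])
p_B_given_E = np.array([[0.9, 0.1],    # P(B | E=0)
                        [0.2, 0.8]])   # P(B | E=1)
p_I_given_B = np.array([[0.7, 0.3],    # P(I | B=0)
                        [0.1, 0.9]])   # P(I | B=1)

# Joint distribution P(E, B, I)
P = np.zeros((2, 2, 2))
for e, b, i in product(range(2), repeat=3):
    P[e, b, i] = p_E[e] * p_B_given_E[e, b] * p_I_given_B[b, i]

def H(p):
    """Shannon entropy in bits of a (flattened) probability table."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# I(E;I) = H(E) + H(I) - H(E,I)
mi = H(P.sum(axis=(1, 2))) + H(P.sum(axis=(0, 1))) - H(P.sum(axis=1).ravel())

# I(E;I | B) = H(E,B) + H(B,I) - H(B) - H(E,B,I)
cmi = (H(P.sum(axis=2).ravel()) + H(P.sum(axis=0).ravel())
       - H(P.sum(axis=(0, 2))) - H(P.ravel()))

print(f"I(E;I)     = {mi:.3f} bits  (interior is correlated with environment)")
print(f"I(E;I | B) = {cmi:.3f} bits (but screened off given the boundary)")
```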
In fact, the angle I’ve been working is to see if I can come up with a model whereby consciousness arises wherever natural, causal forces align to allow local screening off from the environment (which we know life does anyway) plus a number of other conditions I’m still figuring out.
We could have followed it better that way, but there is still no reasoning in those steps. The right way to structure this discussion is to start with the problems you want to solve (and why they’re problems) and then explain how to solve them. This outline still has nothing motivating it. What people need to see is what you think needs explaining and how your theory explains it best. It’s true that the math might be acceptable enough, but no one really wants to spend their time doing math to solve problems they don’t think are problems, or for explaining the behavior of things they don’t think exist.
It is as if someone showed up with all this great math that they said described God. Math is fun and all but we’d want to know why you think there is a God that these equations describe!
Yes, as a strategy to convince your readers, the new ordering would likely be more effective. However, re-ordering or re-phrasing your reasoning in order to be more rhetorically effective is not good truth-seeking behavior. Yudkowsky’s “fourth virtue of evenness” seems relevant here.
The counterargument that Mitchell Porter’s critics bring is recognizable as “Argument From Bias”. Douglas Walton describes it in this way:
Major Premise: If x is biased, then x is less likely to have taken the evidence on both sides into account in arriving at conclusion A.
Minor Premise: Arguer a is biased.
Conclusion: Arguer a is less likely to have taken the evidence on both sides into account in arriving at conclusion A.
In this case, the bias in question is Mitchell Porter’s commitment to ontologically basic mental entities (monads or similar). We must discount his conclusions regarding the deep reading of physics that he has apparently done, because a biased individual might cherry-pick results from physics that lead to the preferred conclusion.
This discounting is only partial, of course—there is a chance of cherry-picking, not a certainty.