There are a lot of reasons that people aren’t responding positively to your comments. One that I think hasn’t been addressed is that this, to a large extent, pattern-matches to a bad set of metapatterns in history. In general, our understanding of the mind has advanced by having to reject our strong intuitions about how our minds are dualist and how aspects of our minds (or our minds as a whole) are fundamentally irreducible. So people look at this and think that it isn’t a promising line of inquiry. Now, this may be unfair, but I don’t think it really is very unfair. The notion that there are irreducible, or even reducible but strongly dualist, aspects of our universe seems to belong to a class of hypotheses which has been repeatedly falsified. So it is fair for someone, by default, to assign a low probability to similar hypotheses.
You have other bits that send worrying signals about your rationality or intentions, like when you write things like:
I don’t mean that I’m suicidal, I mean that I can’t eat air. I spent a year getting to this level in physics, so I could perform this task.
This bit not only made me sit up in alarm, it substantially reduced how seriously I should take your ideas. Previously, my thought process was “This seems wrong, but Porter seems to know a decent amount of physics, more than I do in some respects; maybe I should update toward taking this sort of hypothesis more seriously?” Although Penrose has already done that, so you wouldn’t cause that big an update, this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line. This isn’t as extreme as, say, Jonathan Wells, who got a PhD in biology so he could “destroy Darwinism”, but it does seem similar. The primary difference is that Wells seemed interested in the degree for its rhetorical power, whereas you seem genuinely interested in actually working out the truth. But a casual observer who just read this post would see a very strong match here.
I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don’t have the social capital/status here to get away with it. Eliezer gets away with it even from the people here who don’t consider the Singularity Institute to be a great way of fighting existential risk, because it is hard to have higher status than being the website’s founder (although lukeprog and Yvain might be managing to beat that in some respects). In this context, making a point about how you just want loans at some level reduces status even further. One thing that you may want to consider is looking for other similar sources of funding that are broader and don’t have the same underlying status system. Kickstarter would be an obvious one.
In general, our understanding of the mind has advanced by having to reject our strong intuitions about how our minds are dualist and how aspects of our minds (or our minds as a whole) are fundamentally irreducible.
Sometimes progress consists of doubling back to an older attitude, but at a higher level. Revolutions have excesses. The ghost in the machine haunts us, the more we take the machine apart. I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.
this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line.
Or it’s like learning anatomy, physiology, and genetics, so you can cure a disease. Certainly my thinking about physics has a much higher level of concreteness now, because I have much more to work with, and I have new ideas about details—maybe it’s complexes of twistor polytopes, rather than evolving tensor networks. But I’ve found no reason to question the original impetus.
I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don’t have the social capital/status here to get away with it.
I believe most of the downvotes are coming because of the claims I make (about what might be true and what can’t be true) - I get downvotes whenever I say this stuff. Also because it’s written informally rather than like a scholarly argumentative article (that’s due to writing it all in a rush), and it contains statements to the effect that “too many of you just don’t get it”. Talking about money is just the final straw, I think.
But actually I think it’s going OK. There’s communication happening, issues are being aired and resolved, and there will have been progress, one way or another, by the time the smoke clears.
However, I do want to say that this comment of yours was not bad as an exercise in dispassionate analysis of what causes might be at work in the situation.
One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis. I mean sentences like this:
I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.
As a philosopher myself, I appreciate the usefulness of jargon from time to time, but you sometimes have the air of throwing it around for the sheer joy of it. Furthermore, I (at least) find that that sort of style can sometimes feel like you’re deliberately trying to obscure your point, or that it’s camouflage to conceal any dubious parts.
One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis.
When someone’s spent years on a personal esoteric search for meaning, word salad is a really bad sign.
What he said.
I have difficulty understanding what Mitchell Porter is trying to say when he talks about this topic. When I run into something that is difficult to understand in this manner, I usually find, upon closer examination, that I didn’t understand it because it doesn’t make any sense in the first place. And, as far as I can tell, this is also true of what Mitchell Porter is saying.
I claim that colors obviously exist, because they are all around us, and I also claim that they do not exist in standard physical ontology. Is that much clear?
Now it is.
I disagree that colors do not exist in standard physical ontology, and find the claim rather absurd on its face. (I’m not entirely sure what ontology is, but I think I’ve picked up the meaning from context.)
See:
Brain Breakthrough! It’s Made of Neurons!
Hand vs. Fingers
Angry Atoms
I don’t know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that’s where it comes from.
An ontology is a theory about what it is that exists. I have to speak of “physical ontology” and not just of physics, because so many physicists take an anti-ontological or positivistic attitude, and say that physical theory just has to produce numbers which match the numbers coming from experiment; it doesn’t have to be a theory about what it is that exists. And by standard physical ontology I mean one which is based on what Galileo called primary properties, possibly with some admixture of new concepts from contemporary mathematics, but definitely excluding the so-called secondary properties.
So a standard physical ontology may include time, space, and objects in space, and the objects will have size, shape, and location, and then they may have a variety of abstract quantitative properties on top of that, but they don’t have color, sound, or any of those “feels” which get filed under qualia.
I don’t know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that’s where it comes from.
Asking “where is the experienced color in the physical brain?” shows the hidden problem here. We know from experience that reality includes things that are actually green, namely certain parts of experiences. If we insist that everything is physical, then that means that experiences and their parts are also physical entities of some kind. If the actually green part of an experience is a physical entity, then there must be a physical entity which is actually green.
For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location—the property of always being at some point in space—and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn’t actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles—e.g. “location of center of mass” or “having a part at location x0 and another part at x1”. We can even extend to counterfactual properties, e.g. “the property of flying apart if a heavy third particle were to fly past on a certain trajectory”.
To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that’s a little absurd. It is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue. The properties that are intrinsically available in standard physical ontology are much like arithmetic properties, but with a few additional “physical” predicates that can also enter into the definition.
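For concreteness, a minimal sketch (in Python) of the kinds of properties just described: quantitative combinations, logical conjunctions, and a counterfactual-style predicate built only from single-particle properties. The PointParticle attributes and the would_fly_apart rule are invented for illustration and are not drawn from any real physical model.

```python
# Toy illustration of properties built from single-particle properties:
# quantitative combinations, conjunctions, and a counterfactual-style rule.
# The attributes and the "fly apart" rule are invented for this example.

from dataclasses import dataclass

@dataclass
class PointParticle:
    location: float   # 1-D position, for simplicity
    velocity: float
    charge: float

def center_of_mass(a: PointParticle, b: PointParticle) -> float:
    """A quantitative property of the pair, built from individual locations."""
    return (a.location + b.location) / 2

def has_parts_at(a: PointParticle, b: PointParticle, x0: float, x1: float) -> bool:
    """A conjunctive property: one part at location x0 and another at x1."""
    return {a.location, b.location} == {x0, x1}

def would_fly_apart(a: PointParticle, b: PointParticle, intruder_charge: float) -> bool:
    """A counterfactual-style property, crudely modeled as a fixed rule:
    a sufficiently charged third particle passing by would scatter the pair."""
    return abs(intruder_charge) > abs(a.charge) + abs(b.charge)

pair = (PointParticle(0.0, 0.0, 1.0), PointParticle(2.0, 0.0, 1.0))
print(center_of_mass(*pair))            # 1.0
print(has_parts_at(*pair, 0.0, 2.0))    # True
print(would_fly_apart(*pair, 5.0))      # True
```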
I presume that most modern people don’t consider linguistic behaviorism an adequate account of anything to do with consciousness. Linguistic behaviorism is where you say there are no “minds” or “psychological states”, there are just bodies that speak. It’s the classic case of accounting for experience by only accounting for what people say about experience.
Cognitive theories of consciousness are considered an advance on this because they introduce a causal model with highly structured internal states which have a structural similarity to conscious states. We see the capacity of neurons to encode information e.g. in spiking rates, we see that there are regions of cortex to which visual input is mapped point by point, and so we say, maybe the visual experience of a field of color is the same thing as a sheet of visual neurons spiking at different rates.
But I claim they can’t be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.
Here I say there are two choices. Either you say that on top of the primary properties out of which standard physical ontology is built, there are secondary properties, like actual green, which are the building blocks of conscious experiences, and you say that the experiences dualistically accompany the causally isomorphic physical processes. Or you say that somewhere there is a physical object which is genuinely identical to the conscious experience—it is the experience—and you say that these neuronal sheets which behave like the parts of an experience still aren’t the thing itself, they are just another stage in the processing of input (think of the many anatomical stages to the pathways that begin at the optic nerve and lead onward into the brain).
There are two peculiarities to this second option. First, haven’t we already argued that the base properties available in physical ontology, considered either singly or in conjunction, just can’t be identified with the constituent properties of conscious states? How does positing this new object help, if it is indeed a physical object? And second, doesn’t it sound like a soul—something that’s not a network of neurons, but a single thing; the single place where the whole experience is localized?
I propose to deal with the second peculiarity by employing a quantum ontology in which entanglement is seen as creating complex single objects (and not just correlated behaviors in several objects which remain ontologically distinct), and with the first peculiarity by saying that, yes, the properties which make up a conscious state are elementary physical properties, and noting that we know nothing about the intrinsic character of elementary physical properties, only their causal and structural relations to each other (so there’s no reason why the elementary internal properties of an entangled system can’t literally and directly be the qualia). I take the structure of a conscious state and say, that is the structure of some complex but elementary entity—not the structure of a collective behavior (as when we talk about the state of a neuron as “firing” or “not firing”, a description which passes over the intricate microscopic detail of the exact detailed state).
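As a standard illustration of the “complex single objects” point: a maximally entangled two-particle state cannot be factored into a product of single-particle states, so it has no description as two ontologically independent parts.

\[
|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\,|0\rangle_A|0\rangle_B + |1\rangle_A|1\rangle_B\,\bigr)
\;\neq\; |\psi\rangle_A \otimes |\phi\rangle_B
\quad\text{for any single-particle states } |\psi\rangle_A,\ |\phi\rangle_B.
\]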
The rationale of this move is that identifying the conscious state machine with a state machine based on averaged collective behaviors is really what leads to dualism. If we are instead dealing with the states of an entity which is complex but “fundamental”, in the sense of being defined in terms of the bottom level of physical description (e.g. the Hilbert spaces of these entangled systems), then it’s not a virtual machine.
Maybe that’s the key concept in order to get this across to computer scientists: the idea is that consciousness is not a virtual state machine, it’s a state machine at the “bottom level of implementation”. If consciousness is a virtual state machine—so I argue—then you have dualism, because the states of the state machine of consciousness have to have a reality which the states of a virtual machine don’t intrinsically have.
If you are just making a causal model of something, there’s no necessity for the states of your model to correspond to anything more than averaged behaviors and averaged properties of the real system you’re modeling. But consciousness isn’t just a model or a posited concept, it is a thing in itself, a definite reality. States of consciousness must exist in the true ontology, they can’t just be heuristic approximate concepts. So the choice comes down to: conscious states are dualistically correlated with the states of a virtual state machine, or conscious states are the physical states of some complex but elementary physical entity. I take the latter option and posit that it is some entangled subsystem of the brain with a large but finite number of elementary degrees of freedom. This would be the real physical locus of consciousness, the self, and you; it’s the “Cartesian theater” where diverse sensory information all shows up within the same conscious experience, and it is the locus of conscious agency, the internally generated aspect of its state transitions being what we experience as will.
(That is, the experience of willing is awareness of a certain type of causality taking place. I’m not saying that the will is a quale; the will is just the self in its causal role, and there are “qualia of the will” which constitute the experience of having a will, and they result from reflective awareness of the self’s causal role and causal power… Or at least, these are my private speculations.)
I’ll guess that my prose got a little difficult again towards the end, but that’s how it will be when we try to discuss consciousness in itself as an ontological entity. But hopefully the road towards the dilemma between dualism and quantum monism is a little clearer now.
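A rough sketch, in code, of the “virtual state machine” contrast drawn above. The 4-bit micro-state space and the firing/quiet coarse-graining rule are invented for illustration; the point is only that a “virtual” state lumps together many distinct bottom-level states, whereas the proposal here is that conscious states are bottom-level states themselves.

```python
# Toy contrast between a coarse-grained ("virtual") state machine and the
# bottom-level state space it abstracts over. The 4-bit micro-states and the
# "firing"/"quiet" rule are invented purely for illustration.

from collections import defaultdict
from itertools import product

N = 4
micro_states = list(product([0, 1], repeat=N))   # bottom-level descriptions

def coarse_grain(micro):
    """Map a micro-state to a 'virtual' state via an averaged property."""
    return "firing" if sum(micro) >= N / 2 else "quiet"

classes = defaultdict(list)
for m in micro_states:
    classes[coarse_grain(m)].append(m)

for label, members in classes.items():
    # Each "virtual" state lumps together many distinct micro-states.
    print(label, "covers", len(members), "micro-states")
```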
For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location—the property of always being at some point in space—and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn’t actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles—e.g. “location of center of mass” or “having a part at location x0 and another part at x1”. We can even extend to counterfactual properties, e.g. “the property of flying apart if a heavy third particle were to fly past on a certain trajectory”.
To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that’s a little absurd.
Well, it sounds quite reasonable to me to say that if you arrange elementary particles in a certain, complicated way, you get an instance of something that experiences greenness. To me, this is no different than saying that if you arrange particles in a certain, complicated way, you get a diamond. We just happen to know a lot more about what particle configurations create “diamondness” than “experience of green”ness. (As a matter of fact, we know exactly how to define “diamondness” as a function of particle type and arrangement.)
So, at this point I apply the Socratic method...
Are we in agreement that a “diamond” is a thing that exists? (My answer: Yes—we can recognize diamonds when we see them.)
Is the property “is a diamond” one that can be defined in terms of “quantitative and logical conjunctions of the properties of individual particles”? (My answer: Yes, because we know that diamonds are made of carbon atoms arranged in a specific pattern.)
Hopefully we agree on these answers! And if we do, can you tell me what the difference is between the predicate “is experiencing greenness” and “is a diamond” such that we can tell, in the real world, if something is a diamond by looking at the particles that make it up, and that it is impossible, in principle, to do the same for “is experiencing greenness”?
What I think your mistake is, is that you underestimate the scope of just what “quantitative and logical conjunctions of the properties of individual particles” can actually describe. Which is, literally, anything at all that can be described with mathematics, assuming you’re allowing all the standard operators of predicate logic and of arithmetic. And that would include the function that maps “arrangements of particles” as an input and returns “true” if the arrangement of particles included a brain that was experiencing green and “false” otherwise—even though we humans don’t actually know what that function is!
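A minimal sketch of the claim in this comment: a predicate over particle configurations, written in the same spirit as “is a diamond”. The Particle model and the is_diamond test are toy inventions rather than real crystallography, and is_experiencing_green is deliberately left unimplemented because, as noted above, nobody currently knows how to write it down.

```python
# Toy rendering of "a function that takes an arrangement of particles and
# returns True or False". The Particle model and is_diamond test are invented
# stand-ins; is_experiencing_green is the function claimed to exist in
# principle, which no one currently knows how to write down.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Particle:
    element: str                          # e.g. "C" for carbon
    position: Tuple[float, float, float]  # coordinates in space

def is_diamond(config) -> bool:
    """Toy stand-in: every particle is carbon and sits on integer lattice points."""
    return all(p.element == "C" for p in config) and \
           all(float(c).is_integer() for p in config for c in p.position)

def is_experiencing_green(config) -> bool:
    """The analogous predicate for 'contains a brain experiencing green'."""
    raise NotImplementedError("No one currently knows how to define this.")

sample = (Particle("C", (0.0, 0.0, 0.0)), Particle("C", (1.0, 0.0, 0.0)))
print(is_diamond(sample))   # True for this toy configuration
```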
But I claim they can’t be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.
To sum up, I assert that you are mistaken when you say that there is an ontological mismatch—the sheet of neurons does indeed contain the experience of green. You are literally making the exact same error that Eliezer’s strawman makes in Angry Atoms.
And if you don’t know how to create greenness, it is an act of faith on your part that it is done by physics as you understand it at all.
Perhaps, but physics has had a pretty good run so far...
The key phrase is “as you understand it”. 19th century physics doesn’t explain whatever device you wrote that on.
By talking about “experience of green”, “experiencing greenness”, etc, you get to dodge the question of whether greenness itself is there or not. Do you agree that there is something in reality that is actually green, namely, certain parts of experiences? Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?
Do you agree that there is something in reality that is actually green, namely, certain parts of experiences?
No. Why do you believe there is? Because you seem to experience green? Since greenness is ontologically anomalous, what reason is there to think the experience isn’t illusion?
Well, I’m used to using the word “green” to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system) and not experiences. As in, “This apple is green” or “I see something that looks green.” Which is why I used the expression “experience of greenness”, because that’s the best translation I can think of for what you’re saying into CronoDAS-English.
So the question
Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?
seems like a fallacy of equivocation to me, or possibly a fallacy of composition. It feels odd to me to say that a brain is green—after all, a brain doesn’t look green when you’re cutting open a skull to see what’s inside of it. If “green” in Mitchell-Porter-English means the same thing as “experiences the sensation of greenness” does in CronoDAS-English, then yes, I’ll definitely say that the set of particular physical entities in question possesses the property “green”, even though the same can’t be said of the individual point-particles which make up that collection.
(This kind of word-wrangling is another reason why I tried to stay out of this discussion in the past… trying to make sure we mean the same thing when we talk to each other can take a lot of effort.)
I’m used to using the word “green” to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system)
But you would have been using the word “green” before you knew about wavelengths of light, or had the idea that your experiences were somehow the product of your brain. Green originally denotes a very basic phenomenon, a type of color. As a child you may have been a “naive realist”, thinking that what you see is the world itself. Now you think of your experience as something in your brain, with causes outside the brain. But the experience itself has not changed. In particular, green things are still actually green, even if they are now understood as “part of an experience that is inside one’s brain” rather than “part of the world outside one’s body”.
“Interpretation” is too abstract a word to describe something as concrete as color. It provides yet another way to dodge the reality of color itself. You don’t say that the act of falling over is an “interpretation” of being in the Earth’s gravitational field. The green experiences are green, they’re not just “interpreted as green”.
It feels odd to me to say that a brain is green—after all, a brain doesn’t look green when you’re cutting open a skull to see what’s inside of it.
Since we are assuming that our experiences are parts of our brains, this would be the wrong way to think about it anyway. Your experience of anything, including cutting open someone else’s skull, is supposed to be an object inside your own brain, and any properties of that experience are properties of part of your own brain. You won’t see the color in another brain by looking at it. But somehow, you see the color in your own brain by being it.
If “green” in Mitchell-Porter-English means the same thing as “experiences the sensation of greenness” does in CronoDAS-English
The latter expression again pushes away the real issue—is there such a thing as actual greenness or not. We earlier had some quotes from an Australian philosopher, JJC Smart, who would say there are “experiences of green”, but there’s no actual green. He says this because he’s a materialist, so he believes that all there is in reality is just neurons doing their thing, and he knows that standard physical ontology doesn’t contain anything like actual green. He has to deny the reality of one of the most obviously real things there is, but, at least he takes a stand.
On the other hand, someone else who talks about “experiences of green” might decide that what they mean is exactly the same thing as they would have meant by green, when they were a child and a direct realist. Talking about experience in this case is just a way to emphasize the adult understanding of what it is that one directly experiences—parts of your own brain, rather than objects outside it. But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can’t be, because physics is true and physics contains no such thing as “actual green”?
Lot of words there… I hope I’m understanding better.
But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can’t be, because physics is true and physics contains no such thing as “actual green”?
This is what I’ve been trying to say: “Green” exists, and “green” is also present (indirectly) in physics. (I think.)
What does “present indirectly” mean?
Not one of the fundamental properties, but definable in terms of them.
In other words, present in the same way “diamond” is—there’s no property “green” in the fundamental equations of physics, but it “emerges” from them, or can (in principle) be defined in terms of them. (I’m embarrassed to use the word “emergent”, but, well...)
To use an analogy, there’s no mention of “even numbers” in the axioms of Peano Arithmetic or in first order logic, but S(S(0)) is still even; evenness is present indirectly within Peano Arithmetic. You can talk about even numbers within Peano Arithmetic by writing a formula fragment that is true of all even numbers and false for all other numbers, and using that as your “definition” of even. (It would be something like “∃y (S(S(0)) · y = x)”.) If I understand correctly, “standard physical ontology” is also a formal system, so the exact same trick should work for talking about concepts such as “diamond” or “green”—we just don’t happen to know (yet) how to define “green” the same way we can define “diamond” or “even”, but I’m pretty sure that, in principle, there is a way to do it.
(I hope that made sense...)
Here I fall back on my earlier statement that this
is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue.
Let’s compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic. I think the two cases are quite similar. In both cases you have an infinite tower of increasingly complex conjunctive (etc) properties that can be defined in terms of an ontological base, but getting to color just from arithmetic or just from points arranged in space is asking for magic. (Whereas getting a diamond from points arranged in space is not problematic.)
There are quantifiable things you can say about subjective color, for example its three-dimensionality (hue, saturation, brightness). The color state of a visual region can be represented by a mapping from the region (as a two-dimensional set of points) into three-dimensional color space. So there ought to be a sense in which the actually colored parts of experience are instances of certain maps which are roughly of the form R^2 → R^3. (To be more precise, the domain and range will be certain subsets of R^2 and R^3, respectively.) But this doesn’t mean that a color experience can be identified with this mathematical object, or with a structurally isomorphic computational state.
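For concreteness, a sketch of the mathematical object just described, a map from a 2-D visual region into (hue, saturation, brightness) space. The region and the coloring function are invented examples, and, as the comment says, the claim is not that a color experience is identical with such an object.

```python
# Sketch of "the color state of a visual region" as a map from points of a
# 2-D region (a subset of R^2) into 3-D color space (hue, saturation,
# brightness). The region and the coloring function are invented examples.

from typing import Callable, Tuple

Point2D = Tuple[float, float]        # a point in the visual region
HSB = Tuple[float, float, float]     # hue (degrees), saturation, brightness

def uniform_green_field(p: Point2D) -> HSB:
    """Every point of the region maps to the same green-ish value."""
    return (120.0, 0.8, 0.6)

color_state: Callable[[Point2D], HSB] = uniform_green_field
print(color_state((0.25, 0.75)))     # -> (120.0, 0.8, 0.6)
```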
You could say that my “methodology”, in attempting to construct a physical ontology that contains consciousness, is to discover as much as I can about the structure and constituent relations of a conscious experience, and then to insist that these are realized in the states of a physically elementary “state machine” rather than a virtual machine, because that allows me to be a realist about the “parts” of consciousness, and their properties.
Let’s compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic.
In one sense, there already is a demonstration that you can get colors from the combinations of the elementary properties in standard physical ontology: you can specify a brain in standard physical ontology. And, heck, maybe you can get colors out of Peano Arithmetic, too! ;)
At this point we have at least identified what we disagree on. I suspect that there is nothing more we can say about the topic that will affect each other’s opinion, so I’m going to withdraw from the discussion.
Dualism is a confused notion. If, in a long journey through gathering a tremendous degree of knowledge, you arrive at dualism, you’ve made a mistake somewhere and need to go back and see where you divided by zero. If your logical chain is in fact sound to a mathematical degree of certainty, then arriving at dualism is a reductio ad absurdum of your starting point.
Perhaps you missed that I have argued against functionalism because it implies dualism.
Then you need to do the same for ontologically basic qualia.
I fail to see what your actual position is. Mine is, first, that colors exist, and second, that they don’t exist in standard physical ontology. Please make a comparably clear statement about what you believe the truth to be.
Colours “exist” as a fact of perception. If you’re looking for colours without perception, you’ve missed what normative usage of “colour” means. You’ve also committed a ton of compression fallacy, assuming that all possible definitions of “colour” do or should refer to the same ontological entity.
You’ve then covered your views in word salad; I would not attempt to write with the appalling lack of clarity you’ve wrapped your views in throughout this sequence, except for strictly literary purposes; certainly not if my intent were to inform.
You need to seriously consider the possibility that this sequence is getting such an overwhelmingly negative reaction because you’re talking rubbish.
Why do you put “exist” in quotation marks? What does that accomplish? If I chopped off your hand, would you say that the pain does not exist, it only “exists”?
If you’re looking for colours without perception, you’ve missed what normative usage of “colour” means.
I’m not looking for colors without perception; I’m looking for the colors of perception somewhere in physical reality, since colors are real, and physical reality is supposed to be the only sort of reality there is.
You’ve then covered your views in word salad; I would not attempt to write with the appalling lack of clarity you’ve wrapped your views in throughout this sequence, except for strictly literary purposes; certainly not if my intent were to inform.
It’s not so easy to describe conscious states accurately, and a serious alternative to dualism isn’t so easy to invent or convey either. I’m improvising a lot. If you make an effort to understand it, it may make more sense.
But let us return to your views. Colors only exist as part of perceptions; fine. Presumably you believe that a perception is a type of physical process, a brain process. Do you believe that some part of these brain processes is colored? If someone is seeing green, is there a flicker of actual greenness somewhere in or around the relevant brain process? I doubt that you think this. But then, at this point, nothing in your model of reality is actually green, neither the world outside the brain, nor the world inside the brain. Yet greenness is manifestly there in reality: perceptions contain actual greenness. Therefore your model is incomplete. Therefore, if you wish to include actual conscious experiences in your model, they’ll have to go in alongside but distinct from the physical processes. Therefore, you will have to be a dualist.
I am not advocating dualism, I’m just telling you that if you don’t want to deny the phenomenology of color, and you want to retain your physical ontology, you will have to be a dualist.
Mostly irrelevant to the OP, a question: how implausible do you find the claim that dualism is false (there’s nothing irreducible in material models of our minds) and, at the same time, that qualia (or phenomena, in the sense of constructs from qualia) are ontologically basic? (And, ergo, that materialism, i.e. the material model, is not ontologically basic.)
I don’t know. Probably very low, certainly less than 1%.
Asserting that qualia are ontologically basic appears to be assuming that an aspect of mind is ontologically basic, i.e. dualism. So it’s only not having done the logical chain myself that would let me set a probability (a statement of my uncertainty) on it at all, rather than just saying “contradiction”.