Three rocks in a field aren’t a triangle until there’s a brain with a concept of ‘triangle’ that identifies them as such. Photons of a particular wavelength aren’t red until there’s a brain with a concept of ‘red’ that identifies them as such. A creature isn’t conscious until there’s a brain with a concept of ‘consciousness’ that identifies it as such.
Third one’s tricky because of the self-reference, but that doesn’t make it an exception to the general rule. Concepts are predictive models; a model can’t make predictions unless it’s running on a computer, and brains are the one kind of computer that can be mass-produced by unskilled labor. Qualia, to the extent that they can be coherently defined at all, are a matter of software. Software can be translated between hardware platforms, but cannot exist in any useful form in the absence of hardware.
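The “concepts are predictive models running on computers” claim can be made concrete with a toy classifier: a sketch of the ‘triangle’ concept from the rocks example as an executable predicate. Purely illustrative; the function name and point encoding are my own, not anything from the thread.

```python
# A toy version of "a concept is a predictive model running on hardware":
# the 'triangle' concept as an executable classifier over point sets.
# Purely illustrative; the function name and point encoding are made up.

def is_triangle(points):
    """True iff `points` is three distinct, non-collinear 2-D points."""
    if len(points) != 3 or len(set(points)) != 3:
        return False
    (x1, y1), (x2, y2), (x3, y3) = points
    # Cross product of the two edge vectors; zero means collinear.
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1) != 0

print(is_triangle([(0, 0), (4, 0), (1, 3)]))  # three rocks forming a triangle
print(is_triangle([(0, 0), (1, 1), (2, 2)]))  # three rocks in a line
```

In the spirit of the comment above: nothing in the field “is” a triangle until some physical system actually executes a test like this one.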
And, for the record, the math necessary to fully define a rock is a hell of a lot more complicated than “1+1.” Don’t dismiss it until you’ve properly studied it.
It’s not just tricky, it’s self-contradictory. The mind exists only in the mind, you say?
Concepts are predictive models … brains are [computers] … Qualia … are a matter of software
If you really want to try reducing all of this to physics, I’d recommend that you first deliberately try to dispense with terms which have a technological or user-semantic connotation, because no such thing exists in physical ontology. “Computer” and “software” are being used as metaphors here, and a “model” is an intentional concept. Computer science has the concept of a “state machine”, which is a little better from a physical standpoint, because it doesn’t attach any semantics to the “states”.
OK, fine, you can do such a translation, and you get e.g. qualia are equivalence classes of state machines. At least your claim has now truly been expressed in terms that do not implicitly exceed physical ontology. But it’s still a wrong claim, because it says nothing about the properties that really define qualia, like the “” that we’ve been talking about in another thread.
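The “no semantics attached to the states” point, and the idea of qualia as equivalence classes of state machines, can both be made concrete in a few lines. A minimal sketch, with an encoding (a dict keyed by (state, input) pairs) that is my own illustrative choice:

```python
# A state machine as bare structure: opaque state labels plus a
# transition table, with no semantics attached to the states.
# Two machines that differ only by a relabeling of states fall into
# the same equivalence class. Purely illustrative encoding.

M1 = {("A", 0): "A", ("A", 1): "B", ("B", 0): "A", ("B", 1): "B"}
M2 = {("x", 0): "x", ("x", 1): "y", ("y", 0): "x", ("y", 1): "y"}

def isomorphic_under(relabel, m1, m2):
    """Check that `relabel` carries every transition of m1 onto m2."""
    return len(m1) == len(m2) and all(
        m2.get((relabel[s], i)) == relabel[t] for (s, i), t in m1.items()
    )

print(isomorphic_under({"A": "x", "B": "y"}, M1, M2))  # same machine, renamed
print(isomorphic_under({"A": "y", "B": "x"}, M1, M2))  # wrong relabeling
```

Nothing in M1 or M2 says what state “A” means; that is exactly the sense in which the formalism stays within a purely structural ontology.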
the math necessary to fully define a rock is a hell of a lot more complicated than “1+1.” Don’t dismiss it until you’ve properly studied it.
I don’t study rocks, but I study physics every day. I know the mathematics is complicated. What I’m saying is that physics is not mathematics.
it says nothing about the properties that really define qualia, like the “” that we’ve been talking about in another thread
So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn’t have anything to do with the referent of ‘redness’. It looks like your obvious premise that redness isn’t reducible implies epiphenomenalism. Which is absurd, obviously.
Edit: Wow, you (nearly) bite the bullet in this comment! You say:
Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
I claim that mental states can be regarded as causes, that they are indeed a shorthand for immensely complicated physical details (and for significantly less complicated, but still quite complicated, computational details), and claim further that they cause a lot of things. For instance, they’re a cause of this comment. I claim that the word ‘cause’ can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation.
So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?
So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn’t have anything to do with the referent of ‘redness’. It looks like your obvious premise that redness isn’t reducible implies epiphenomenalism. Which is absurd, obviously.
No, it just means that plays a causal role in us, which would be played by something else in a simulation of us.
There’s nothing paradoxical about the idea of an unconscious simulation of consciousness. It might be an ominous or a disconcerting idea, but there’s no contradiction.
I claim that mental states can be regarded as causes, that they are indeed a shorthand for immensely complicated physical details (and for significantly less complicated, but still quite complicated, computational details), and claim further that they cause a lot of things. For instance, they’re a cause of this comment. I claim that the word ‘cause’ can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation.
So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?
See what I just said to William Sawin about fundamental versus derived causality. These are derived causal relations; really, they are regularities which follow indirectly from large numbers of genuine causal relations. My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity—a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be.
Being a single entity means that it can enter directly into whatever fundamental causal relations are responsible for physical dynamics. Being that entity, from the inside, means having the sensations, thoughts, and desires that you do have; described mathematically, that will mean that you are an entity in a particular complicated, formally specified state; and physically, the immediate interactions of that entity would be with neighboring parts of the brain. These interactions cause the qualia, and they convey the “will”.
That may sound strange, but even if you believe in a mind that is material but non-fundamental, it still has to work like that or else it is causally irrelevant. So when you judge the idea, remember to check whether you’re rejecting it for weirdness that your own beliefs already implicitly carry.
My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity—a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be.
So you’re taking the existing causal graph, drawing a box around all the interactions that happen inside a brain, and saying that everything inside the box counts as one thing.
That’s not simplification, it’s just bad accountancy.
Where else would it be? I’m saying that a brain is an environment where ideas can do interesting things (like reproducing themselves, mutating, splitting and recombining), comparable to the interesting things that started happening a very long time ago among amino acids, phospholipid membranes, and assorted other organic chemicals, which eventually resulted in the formation of brains. Any Turing-complete computer is also a sort of environment for ideas.
An idea outside an environment capable of supporting it does not do interesting things. It might be dormant, like a virus or bacterial spore, and colonize any less-hostile environment to which it’s introduced. It might not. As yet, the only reliable way to distinguish between a dormant idea and a different arrangement of the same parts which does not constitute a dormant idea is to find an environment in which it will do interesting things.
For example, if you find a piece of baked clay with some scratch-marks in it, and want to know whether they’re cuneiform or just random scratches, you could show it to an archaeologist. The archaeologist looks at the tablet and compares it to prior knowledge about cuneiform—that is to say, transfers information about shape and coloration into her brain via the optic nerve and, once inside, drops it into the informational equivalent of a dish of agar. If anything interesting pops up, it’s an idea. If not, either it’s just noise, or it’s an idea that the archaeologist can’t figure out. There’s no way to definitively prove the absence of potential ideas in a given information-bearing substrate.
If these disembodied qualia-properties don’t help you make any actionable predictions beyond what physicalism could do, and their presence is unfalsifiable, I can’t see any point to this debate. Is it a social-signaling contest of some sort?
Let’s go back to your original statement:
A creature isn’t conscious until there’s a brain with a concept of ‘consciousness’ that identifies it as such.
OK, so according to you, we have concepts existing before and independently of consciousness, and we also have that consciousness is not a property that is objectively present (or else there’d be no need to appeal to the conceptual judgement of a brain, as a necessary cause of consciousness’s existence). Both of these have to be true if you are to avoid circularity.
The second one already falsifies your account of consciousness. The difference between being conscious and not being conscious is not a matter of convention. It’s an internal fact about you which is not affected by whether I am around to express opinions.
It sounds like you want the consciousness of a brain to depend on the conceptual judgements of that same brain, which is at least less abjectly dependent on the epistemology of outsiders. But it’s still false. If you are conscious, you are conscious regardless of whatever opinions or concepts you have. Your conceptual capacities limit your possible conscious experience, in the sense that you can’t consciously identify something as an X if you don’t have the concept X, but whether or not you’re conscious doesn’t depend on how you are using (or misusing) your conceptual faculties at any time.
Just to clarify, by consciousness I mean awareness in all forms, not just self-awareness. What I said still applies to self-awareness as well as to awareness in general, but I thought I would make explicit that I’m not just talking about the sense of being a self. Even raw, self-oblivious sensory experience is a form of consciousness.
If these disembodied qualia-properties don’t help you make any actionable predictions beyond what physicalism could do, and their presence is unfalsifiable, I can’t see any point to this debate. Is it a social-signaling contest of some sort?
Maybe my very latest comments will clear things up a little. The immediate problem with physicalism is that reality contains qualia and physicalism doesn’t. In a reformed physicalism that does contain qualia, they would have causal power.
Just to clarify, by consciousness I mean awareness in all forms, not just self-awareness. What I said still applies to self-awareness as well as to awareness in general, but I thought I would make explicit that I’m not just talking about the sense of being a self.
Ah, so we’re arguing over definitions.
The immediate problem with physicalism is that reality contains qualia and physicalism doesn’t. In a reformed physicalism that does contain qualia, they would have causal power.
Let’s say you take an organism capable of receiving and interpreting information in the form of light, such as a ferret with working eyes and a visual cortex. Duplicate it with arbitrary precision, keep one of the copies in a totally lightless box for a few minutes, and shine a dazzling but nondamaging spotlight on the other for the same period of time. Then open the box, shut off the spotlight, and show them both a picture.
The ferret from the box would see blindingly intense light, gradually fading into the picture, which would seem bright and vivid. The ferret from the spotlight would see near-total darkness, gradually fading into the picture, which would seem dull and blurry. Same picture, very different subjective experience, but it’s all the result of physiological (mostly neurological) processes that can be adequately explained by physicalism.
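The box/spotlight setup can be sketched as a simple gain-control model: perceived brightness is the stimulus scaled against an adaptation level that tracks recent input, so identical stimuli produce different responses depending on internal state. All constants and names here are made up for illustration, not a claim about real ferret physiology:

```python
# A toy gain-control model of light adaptation: the two physically
# identical ferrets diverge only in one internal state variable set
# by their recent light exposure. Illustrative constants only.

def adapt(history, rate=0.5):
    """Adaptation level as an exponential moving average of past input."""
    level = history[0]
    for s in history[1:]:
        level += rate * (s - level)
    return level

def perceived(stimulus, adaptation, eps=1e-6):
    """Weber-like scaling: response relative to the adapted level."""
    return stimulus / (adaptation + eps)

dark_ferret = adapt([0.0] * 20)     # minutes in the lightless box
light_ferret = adapt([100.0] * 20)  # minutes under the spotlight

picture = 10.0  # the same picture shown to both copies
print(perceived(picture, dark_ferret))   # huge response: blindingly vivid
print(perceived(picture, light_ferret))  # small response: dull and dim
```

Same input, different output, and the whole divergence lives in one ordinary physical state variable, which is the comment’s point.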
Does the theory of qualia make independently-verifiable predictions that physicalism cannot? Or, if the predictions are the same, is it somehow simpler to describe mathematically? In the absence of either of those conditions, I am forced to consider the theory of qualia needlessly complex.