I successfully referred to something with the phrase. I know I did because your response wasn’t “Huh? What does that word mean?”
That’s also true of things like the Christian Trinity and immortal souls and consciousness and acausal free will. All these words refer to things that are untestable and unobservable, or are described in internally inconsistent ways (logical impossibilities); those of them that could potentially exist, don’t exist as a matter of fact; and some of them are just meaningless strings of words.
The real referent in these cases is just the sum of everything people tend to say or feel about these supposed concepts.
I’m more than open to the suggestion that subjective experience is an illusion or an error—but it is the constitutive feature of our existence. Curious people aren’t going to just stop talking about it without a very good reason. The burden is on those who don’t think it should be discussed to explain why.
I certainly agree that experience exists—I know I have mine, and everyone else says the same about themselves. But if we insist on treating it as purely subjective experience, then we’ll never be able to say anything about it, pretty much by definition. In my experience, all those curious people are talking about badly-understood notions deriving from belief in mind-body duality. No matter how much we learn about objective experience, even if we can manipulate somebody’s experience in any way we like, people can still say they don’t understand subjective experience.
It’s easy to think that because we experience things (as a verb), there must be some subjective experience to talk about. But my position is that if we can’t formulate a question about subjective experience—a question that will make us behave differently depending on the answer—then there’s nothing to talk about. We can go searching for answers, but there’s no such thing as searching for a question. Are we supposed to one day think of a question and be enlightened? But any such question exists in its own right, and can be answered if we ever think it’s important in the way that any question may be important. Meanwhile, if we have some kind of psychological drive to look for The Question, we might as well ignore that drive or look for a way to suppress it—just as we do with other unprofitable, unfulfillable drives.
That’s my position, anyway...
Agreed. This is a good line of attack. Egan’s response in the FAQ is:
some people have suggested that a sequence of states could only experience consciousness if there was a genuine causal relationship between them. The whole point of the Dust Theory, though, is that there is nothing more to causality than the correlations between states.
I don’t really know where he is coming from. If that is “the point” of Dust theory, I don’t see how he has made that argument. It looks to me like brains and genuine simulations are indeed causal, but arbitrary patterns are not.
In a timeless view, causality is just (regular) correlation in spacetime, as Egan says. I’m not sure what you are saying, though.
I also asked whether simulating one or just a few mental states, instead of the whole evolution of your mental state over time, creates some kind of subjective experience. In that case, would it be morally wrong to keep a highly detailed scan of your brain taken when you were feeling sad?
That said, it isn’t obvious to me why causation would be necessary for consciousness. Say we simulate your brain and record the simulation. We then divide the recording into 100,000 pieces, scramble them, and put them back together. Then we play the recording. The Dust theory says that the recording will be conscious, just not proceeding along the arrow of time the way we are. Is the recording still a causal system?
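The scrambling procedure above can be sketched in a few lines of Python. This is a toy stand-in, not an actual brain simulation: the "dynamics" and all names here are purely illustrative. The point it makes is narrow: scrambling a recording loses no information, since the original order is recoverable whenever the permutation is known.

```python
import random

def simulate(initial_state, steps):
    """A stand-in for brain simulation: each state is a deterministic
    function of the previous one (here, a trivial counter update)."""
    states = [initial_state]
    for _ in range(steps):
        states.append(states[-1] + 1)  # placeholder dynamics
    return states

recording = simulate(0, 99_999)      # the "recording": 100,000 states

# Scramble the recording into an arbitrary order.
perm = list(range(len(recording)))
random.shuffle(perm)
scrambled = [recording[i] for i in perm]

# Put it back together: invert the permutation.
restored = [None] * len(recording)
for out_pos, src_pos in enumerate(perm):
    restored[src_pos] = scrambled[out_pos]

assert restored == recording  # no information was lost by scrambling
```

Whether the scrambled sequence still counts as a *causal* system is exactly the question the scrambling leaves open; the sketch only shows that the correlational structure survives.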
That’s entirely a matter of definition—the definitions of “consciousness” and of “causality”. You can define them any way you like, but what do you actually learn about reality from this?
This part of Dust theory strikes me as leading up to the conclusion that “there are many conscious states!” without defining what consciousness means, and so not actually saying anything.
Regarding the rest of the objection...
You don’t address my central claim: that the mapping of ‘mental states’ to the physical representation used is arbitrary. If I can build a physics-simulator as a state machine encoding complete rules of physical evolution, then by the principles of Turing equivalence, I can build some other machine that uses any mapping I like from physical states to numbers (each number representing the simulator’s state). Does it then generate subjective experience in “someone”, and is it morally significant, to feed that machine some given number—even the number 1 for instance—because I built the machine to represent torture with that number?
You say of this,
You definitely can’t just say of some set of patterns “These are Jack suffering” and make them that way.
But why not? I can choose the mapping as I like.
To be sure there is a mind and to specify a particular experience of me suffering I suspect you would have to actually simulate me suffering.
Do you mean that it’s difficult to get the information needed to build a machine that correctly maps the number 1 to your suffering?
I can do this, for instance, by observing you in a normal state—recording what your brain looks like—then applying my understanding of the nervous system’s pain signals to build a description of your brain suffering pain. It’s not difficult in principle.
Or do you mean I might simulate your brain suffering without thereby creating a suffering mind? Why? And why would this be anything more than a matter of arbitrary definition?
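The arbitrary-mapping claim above can be made concrete with a toy sketch. All names and the "physics" rule here are invented for illustration: a tiny simulator steps through states, and a second machine uses an arbitrary bijection from those states to numbers, so that e.g. the number 1 "represents" whatever state we choose. Translated through the mapping, the two machines realize identical dynamics.

```python
def step(state):
    """Placeholder physical evolution rule on states {0..6}."""
    return (state * 3 + 1) % 7

# An arbitrary bijection from simulator states to number labels.
# Nothing about the dynamics privileges one labelling over another.
encode = {0: 4, 1: 1, 2: 6, 3: 0, 4: 2, 5: 5, 6: 3}
decode = {v: k for k, v in encode.items()}

def relabeled_step(label):
    """The 'other machine': the same dynamics, expressed on arbitrary labels."""
    return encode[step(decode[label])]

# The two machines agree once we translate through the mapping:
for s in range(7):
    assert encode[step(s)] == relabeled_step(encode[s])
```

The question in the thread is whether feeding such a machine a number to which we have assigned the meaning "torture" generates any experience, given that the assignment itself is this arbitrary.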
My logic isn’t invalid. In addition to there being mental states with relation K to your final mental state, there are some such states where you are happy and others in which you experience eternal torture. That is also a consequence of the argument.
That means there are (infinitely) many entities you are, with many different experiences. And all of them are you. That sounds like an… unorthodox use of the word “you” :-)
The problem is that you have not defined what ‘you’ means in this context.
My position on this is basically that our concept of personhood confuses types and tokens because it was developed in a world where every person had only one token. The fact that our concept of personal identity isn’t equipped to deal with the Dust argument isn’t really a point against it.
Why call this hypothetical collection of persons “you” (or indeed “me”) if it contains many different persons and doesn’t match our existing use of the word “you”?
Clarifying: Is the “Subjective Dust Theory” different from some other Dust theory as you understand it? I’m trying to describe Egan’s Dust theory.
I coined the term “subjective Dust theory” to mean Dust theory as applied to me and my subjective experience (producing conclusions such as “I’m necessarily immortal”), as opposed to Dust theory applied to other minds.
Also, I agree. Based on our experiences we can conclude that we are not dust-minds.
Obligatory question: given what observations would you assign high probability to the possibility that you are a dust-mind? Why would you privilege it over competing theories, which include:
You, but not all other possible experiences, are being simulated by someone for some reason.
Your memory or other parts of your mind are faulty, making you falsely remember or experience a chaotic universe (or whatever it is you observed).
Someone is deliberately messing / has messed with your mind, result as above.
You are a Boltzmann brain. (Unlike a dust-mind, you live normally over time, it’s just that you and your surroundings were created out of chaos by chance.)
Arguably, each of these theories is much more specific than the idea that you’re a dust-mind. That is, Dust theory predicts that there will be (infinitely many?) versions of you, some of which are being simulated, others have faulty memory, yet others are having their minds messed with by aliens, and still others are genuine Boltzmann brains. So in the absence of evidence to choose one of these, we should stick with the most general applicable theory—Dust theory.
On the other hand, the classes of all simulations and of all Boltzmann-brains also include all dust-minds… (You can simulate a universe containing a dust mind, and a dust mind’s states can come about by Boltzmann chance.) So it’s not conclusive.
I certainly agree that experience exists—I know I have mine, everyone else says the same about themselves.
I was actually going to remark in the original comment and my previous one that I thought “subjective experience” was redundant. I truly have no idea what non-subjective experience could possibly be. “Subjective experience” isn’t something that is contrasted with other kinds of experience. It isn’t my coinage; as far as I know it’s a legacy term, but it’s helpful in that it combines ‘the subject’ with ‘experiencing’. If that makes you uncomfortable, by all means replace every instance of ‘subjective experience’ with ‘experience’. I think you can safely do the same with ‘consciousness’ or ‘qualia’, but I imagine you don’t like those terms either.
In a timeless view, causality is just (regular) correlation in spacetime, as Egan says. I’m not sure what you are saying, though.
The mercury in my barometer always drops before a thunderstorm. My barometer has never caused a thunderstorm. Thus, I prefer a counterfactual theory of causation. If you think Egan is right, then how is a dust mind different from “causal sequences of physical states, i.e. outright simulations”?
I also asked whether simulating one or just a few mental states, instead of the whole evolution of your mental state over time, creates some kind of subjective experience. In that case, would it be morally wrong to keep a highly detailed scan of your brain taken when you were feeling sad?
I think the argument requires that there be more than one mental state, though one can skip mental states. But let’s say you had three detailed scans from a period of sadness. Whether it is immoral would depend on whether we distinguish identical and simultaneous copies of persons in our utility function. But if you do care about such copies, then yeah, it wouldn’t be the nicest thing to do.
That’s entirely a matter of definition—the definitions of “consciousness” and of “causality”. You can define them any way you like, but what do you actually learn about reality from this? This part of Dust theory strikes me as leading up to the conclusion that “there are many conscious states!” without defining what consciousness means, and so not actually saying anything.
The concepts of ‘consciousness’ and ‘causality’ describe features of the way we relate to the external world. I would like a coherent picture of this relation. Cause and effect, in particular, is a huge part of how we experience the world. How this concept relates to what is actually going on is a really interesting question. If a system needs to be the kind of system we recognize as a causal system in order to produce a subject that experiences the world, that would be something interesting to know. Getting a really precise definition of what consciousness is would be really cool. I know there are a lot of people working on it, but that isn’t me. I don’t at all think that one needs a really precise analytic definition of a concept in order to employ it or say meaningful things about it.
You don’t address my central claim: that the mapping of ‘mental states’ to the physical representation used is arbitrary.
I just meant that something can’t just be labelled “Jack suffering”. There has to actually be a set of patterns that represents it. The set of real numbers, for example, is sufficiently complex that if it is represented in the universe (say by an infinite number of particles), then according to the Dust theory that representation includes a pattern that is Jack suffering. But it also includes a pattern that is Jack really happy. And it includes lots of other patterns. Your saying “This is Jack suffering” doesn’t change that. The information is there even if you aren’t reading it that way.
Now what you might be able to do is build a really well-determined structure such that only the states of me suffering are represented. I don’t really know. If you can, though, I’m inclined to say that what you end up with will just be a run-of-the-mill simulation of me suffering, something we’d all recognize as bad. The only way to determine that you’ve created a pattern that is exactly as you want it to be is to run a regular old person-simulation—I think.
But I might not be responding to your concern, I’m still pretty confused about what that is.
That means there are (infinitely) many entities you are, with many different experiences. And all of them are you. That sounds like an… unorthodox use of the word “you” :-)
Yeah, I put it more delicately in the undergraduate thesis proposal I just turned in. But yeah it is unorthodox. :-) But the alternative is to give up a coherent account of personal identity altogether, as far as I’m concerned.
Why call this hypothetical collection of persons “you” (or indeed “me”) if it contains many different persons and doesn’t match our existing use of the word “you”?
Long answer, I’ll come back to it.
Obligatory question: given what observations would you assign high probability to the possibility that you are a dust-mind? Why would you privilege it over competing theories?
I’m not sure we can distinguish skeptical hypotheses by empirical evidence—I’m pretty sure we can’t, by definition. But we might find empirical evidence that alters our estimations of the premises of those positions (our understanding of entropy could change, our understanding of brain simulations will surely change once we actually succeed, etc.). Also, we might be able to distinguish them according to a priori criteria like parsimony.