I also want to emphasize that P is your own personal experience, not any abstract “subject’s”. It’s the one that you can access directly.
Er… By “your”, do you mean to refer to me, personally? I’ll assume that’s what you meant unless you specify otherwise. Henceforth I am the subject! :-D
I would agree with your statement if you removed the word “completely”.
But that’s the crux! I know I’m conscious in a way that is so devastatingly self-evident that “evidence” to the contrary would render itself meaningless. But if some theory for P were developed that demonstrated that Q doesn’t exist, I wouldn’t view that theory as nonsensical. It’d be surprising, but not blatantly self-contradictory like a theory that says P doesn’t exist. I believe in Q for highly fallible reasons, but I believe in P for completely different reasons that don’t seem to be at all fallible to me. I deduce Q but I don’t deduce P.
(Although I wonder if we’re just spinning our wheels in the muck produced from a fuzzy word. If we both agree that P is self-evident while Q is deduced from Pq, perhaps there’s no disagreement...?)
Obviously, you know you are conscious, and you can experience P directly. However, you can also collect the same kind of data on yourself (or have someone, or some thing, do it for you) as you would on other people. For example, you could get your brain scanned, record your own voice and then play it back, install a sensor on your fridge that records your feeding habits, etc.; these are all real pieces of evidence that people are routinely collecting for practical purposes.
Agreed. Notice, though, that the only way I’m able to correlate this Q-like data with P is because I can see the results of, say, the brain scan and recognize that it pairs with a particular part of P. For instance, I can tell that a certain brain scan corresponds with when I’m mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process.
If you think that the above paragraph is true, then it would follow that you (probably) can collect some data on your own Q, as it would be experienced by someone else who is conscious (assuming, again, that you are not the only conscious being in the Universe, and that your own consciousness is not privileged in any cosmic way).
Mostly agreed. If others experience, then others experience. :-)
The main point at which I disagree is that P is privileged. There’s no such thing as a P-less perspective. But if we’re granting that others are actually conscious (i.e., that Q exists) and that we can switch subjects with a sort of P-transformation (i.e., we can grant that you have P and that within your P my consciousness is part of Q), then I think that might not be terribly important to your point. We can mimic strong objectivity by looking at those truths that remain invariant under such transformations.
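To pin down what I mean by “invariant under such transformations”, here’s a rough sketch in ad-hoc shorthand (my own notation, nothing we’ve formally agreed on):

  For each subject s, let P_s be that subject’s first-person experience.
  A P-transformation T(a→b) re-centers the description: P_a is relabeled as part of Q (as seen from b), and P_b takes over the role of “P”.
  A claim X is “objective enough” for our purposes iff X holds in P_a and still holds after applying T(a→b), for every pair of subjects a and b.

So “subject a reports seeing blue” survives every such re-centering, while “this experience of blue is mine” does not; the former is the kind of invariant truth I mean.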
If you agree with that as well, then, assuming that we ever develop a good enough model of Q which would allow you to predict any person’s behavior with some useful degree of certainty, such a model would then be able to predict your own behavior with some useful degree of certainty.
Hmm… “behavior” is being used in two different ways here. When we use our “theory of Q” to make predictions, what we’re doing is assuming that Q exists and is indicated by Pq, and then we make predictions about what happens to Pq under certain circumstances. On the other hand, when we look at my “behavior”, what we’re considering is my P in a wider scope going beyond just Pq. For instance, others claim that they see blue when we shine light of a wavelength of 450 nm into their functional eyes. When we shine such light into my eyes, I see blue. Those are two very different kinds of “behavior” from my perspective!
But presumably under the P-transformation mentioned earlier, other subjects actually do experience blue, too. So we’ll just go with this. :-)
If you agree that the above is possible, then we can go one step further. A good model of Q would not only predict what a person would do, but also what he would think...
I agree with what you elaborate upon after this. Since the “behavior” here is a kind of experience, I would include the experience of thinking in that. So yes, already granted.
At this point, we have a model that can explain both your thoughts and your actions, and it does so solely based on external evidence. It seems like there’s nothing left for P to explain, since Q explains everything.
I wonder if you arranged your sentence a little bit backwards...? I think you meant to say, “It seems like there’s nothing left of P to explain, since our theory of Q explains everything.” Is that what you meant?
If so, then sure. There’s a detail here I’m uneasy about, but I think it’s minor enough to ignore (rather than write three more paragraphs on!).
Thus, P is a null concept; this is the “objective truth that this ‘bias’ is causing you to mentally deviate from”, which you asked about in your comment. That is, the “objective truth” is that P can be fully explained solely in terms of Q, even though it doesn’t feel like it could be.
Hmm. You seem to be saying two different things here as though they’re the same thing. One I strongly disagree with, and the other I half-agree with.
The one I half-agree with is that based on the trajectory you describe, it seems we can describe P with the same brush we use to explain Q. The half I hesitate about is this claim that we can just equate P and Q. That’s the part that is to be explained! But perhaps something would arise in the process of elaborating on a theory of Q.
The part I totally disagree with is the claim that “P is a null concept”. Any theory that disregards P as a hallucination, or irrelevant, or a bias of any sort, is incoherent. I’ll grant that the impression that P is special could turn out to be a bias, but not P itself. And we can’t disregard the relevance of P. How would we ever gain evidence that P can be disregarded? Doesn’t that evidence have to come through P?
But I do agree:
We should be able to predict Pq with evidence that remains fixed under a P-transformation.
It seems easier and more consistent to assume that Pq points to an extant Q.
If Q exists, then under a P-transformation my experience (previously P) is part of Q.
Therefore, a full model of Pq should offer a kind of explanation of P.
But I still don’t see how this model actually connects P and Q. It just assumes that Q exists and that it’s a kind of P (i.e., that P-transformations make sense and are possible).
Eeergh, that’s a whole other topic for a whole other thread...
Fair enough!
It’s much like how you can never know for sure that you’re not dreaming: any test you can perform is a test you can dream. There’s no way out even in principle.
Why not just use Occam’s Razor?
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
I’m reminded of some brilliant times I’ve tried to become lucid in my dreams. I look at an elephant standing in my living room and think, “Why is there an elephant in my living room? That’s awfully odd. Could I be dreaming? Well, if I were, this would be really strange without much of an explanation. But the elephant is here because I went to China and drank tea with a spoon. That makes sense, so clearly I’m not dreaming.”
So when you go through an analysis of whether the assumption that you’re awake yields shorter code in its description than the assumption that you’re dreaming does, how sure can you really be that you have any evidence at all that you’re not dreaming? Sure, you can resort to Bayesian analysis—but how do you know you didn’t just concoct that in your dream tonight and that it’s actually gibberish?
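(To make explicit the comparison I’m gesturing at: if we cash out “shorter code” in the usual MDL/Bayesian way, the analysis would look roughly like

  P(awake | data) / P(dreaming | data)
    = [P(awake) / P(dreaming)] × [P(data | awake) / P(data | dreaming)]

with the prior odds weighted toward whichever hypothesis has the shorter description, something like 2^(-description length). Those are textbook formulas, not anything special to our discussion; my worry is that every term in them, the data, the likelihoods, even my memory of how Bayes’ theorem works, could itself be dream-stuff.)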
I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time (which might be part of why I’m not lucid in more of my dreams!). But if you really want to tackle the issue, you’re going to run into some pretty basic epistemic obstacles. How do you come to any conclusions at all when anything you think you know could have been completely fabricated in the last three seconds?
Er… By “your”, do you mean to refer to me, personally? I’ll assume that’s what you meant unless you specify otherwise.
Yep, that’s right. I’m just electrons in a circuit as far as you’re concerned! :-)
I know I’m conscious in a way that is so devastatingly self-evident that “evidence” to the contrary would render itself meaningless. … If we both agree that P is self-evident while Q is deduced from Pq, perhaps there’s no disagreement...?
Sure, that makes sense, but I’m not trying to abolish P altogether. All I’m trying to do is establish that P and Q are the same thing (most likely), and thus the “Hard Problem of Consciousness” is a non-issue. So I can agree with the last sentence in the quote above, but that probably isn’t worth much as far as our discussion is concerned.
For instance, I can tell that a certain brain scan corresponds with when I’m mentally rehearsing a Mozart piece because I experienced the rehearsal when the brain scanning occurred. So P is still implicit in the data-collection and -interpretation process.
I’m not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn’t indicate that you’re mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I’m not sure what you’re driving at.
I agree with what you elaborate upon after this. Since the “behavior” here is a kind of experience, I would include the experience of thinking in that.
When I used the word “behavior”, I actually had a much narrower definition in mind—i.e., “something that we and our instruments can observe”. So, brain scans would fit into this category, but also things like, “the subject answers ‘blue’ when we ask him what color this 450 nm light is”. I deliberately split up “what the test subject would say” from “what he will actually think and experience”. But it seems like you agree with both points, maybe:
I think you meant to say, “It seems like there’s nothing left of P to explain, since our theory of Q explains everything.” Is that what you meant?
Pretty much. What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, “sure, the theory of gravity fully explains why the Earth doesn’t fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening”. Sure, the gnomes could exist, but there are lots of things that could exist...
The one I half-agree with is that based on the trajectory you describe, it seems we can describe P with the same brush we use to explain Q. The half I hesitate about is this claim that we can just equate P and Q.
If you agree with the first part, what are your reasons for disagreeing with the second? To me, this sounds like saying, “sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn’t mean that we can just equate electricity and magnetism”.
Maybe we disagree because of this:
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
Well, yeah, Occam’s Razor isn’t an oracle… It seems to me like we might have a fundamental disagreement about epistemology. You say “I think in the end it’s just not very pragmatically useful to suppose I’m dreaming, so I don’t worry too much about this most of the time”; I’m in total agreement there. But then, you say,
But if you really want to tackle the issue, you’re going to run into some pretty basic epistemic obstacles. How do you come to any conclusions at all when anything you think you know could have been completely fabricated in the last three seconds?
I personally don’t see any issues to tackle. Sure, I could be dreaming. I could also be insane, or a simulation, or a brain in a jar, or an infinite number of other things. But why should I care about these possibilities—not just “most of the time”, but at all? If there’s no way, by definition, for me to tell whether I’m really, truly awake, and if I appear to be awake, then I’m going to go ahead and assume I’m awake after all. Otherwise, I might have to consider all of the alternatives simultaneously, and since there’s an infinite number of them, it would take a while.
It looks like you firmly disagree with the paragraph above, though I still can’t see why. But that does explain (if somewhat tangentially) why you believe that the “Hard Problem of Consciousness” is a legitimate problem, and why I do not.
You know, something clicked last night as I was falling asleep, and I realized why you’re right and where my confusion has been. But thanks for giving me something specific to work from! :-D
I think my argument can be summarized like so:
All data comes through P.
Therefore, all data about P comes through P.
All theories about P must be verified through data about P.
This means P is required to explain P.
Therefore, it doesn’t seem like there can be an explanation of P.
That last step is nuts. Here’s an analogy:
All (visual) data is seen.
Therefore, all (visual) data about how we see is seen.
All theories of vision must be verified through data about vision. (Let’s say we count only visual data. So we can use charts, but not the way an optic nerve feels to the touch.)
This means vision is required to explain vision.
Therefore, it doesn’t seem like there can be an explanation of vision.
The glaring problem is that explaining vision doesn’t render it retroactively useless for data-collection.
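(In rough shorthand, the broken step has the shape

  premise:    every piece of evidence about X is gathered by using X
  conclusion: therefore there can be no explanation of X

and the conclusion just doesn’t follow from the premise; an instrument can be fully involved in studying itself without that blocking the study. That’s exactly what the vision case shows.)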
Thanks for giving me time to wrestle with this dumbth. Wrongness acknowledged. :-)
I’m not sure how these two sentences are connected. Obviously, a perfect brain scan shouldn’t indicate that you’re mentally rehearsing Mozart when you are not, in fact, mentally rehearsing Mozart. But such a brain scan will work on anyone, not just you, so I’m not sure what you’re driving at.
What I was driving at is that there’s no evidence that it corresponds to mentally rehearsing Mozart for anyone until I look at my own brain scan. All we can correlate the brain scans with is people’s reports of what they were doing. For instance, if my brain scan said I was rehearsing Mozart but I wasn’t, and yet I was inclined to report that I was, that would give me reason for concern.
The confusion here comes down to a point that I still think is true, but only because I think it’s tautological: From my point of view, my point of view is special. But I’m not sure what it would mean for this to be false, so I’m not sure there’s any additional information in this point—aside from maybe an emotional one (e.g., there’s a kind of emotional shift that occurs when I make the empathic shift and realize what something feels like from another person’s perspective rather than just my own).
What I meant was that, since our theory of Q explains everything, we gain nothing (intellectually speaking) by postulating that P and Q are different. Doing so would be similar to saying, “sure, the theory of gravity fully explains why the Earth doesn’t fall into the Sun, but there must also be invisible gnomes constantly pushing the Earth away to prevent that from happening”. Sure, the gnomes could exist, but there are lots of things that could exist...
Well, I do know that P exists, and I know that from my point of view P is extremely special. That’s not invisible gnomes; it’s just true. But saying “from my point of view P is extremely special” is tautological since P is my perspective. When something is a tautology, there’s nothing to explain. That’s why it’s hard to come up with an explanation for it. :-P
If you agree with the first part, what are your reasons for disagreeing with the second? To me, this sounds like saying, “sure, we can explain electricity with the same theory we use to explain magnetism, but that doesn’t mean that we can just equate electricity and magnetism”.
I agree with you now.
Maybe we disagree because of this:
Because if you were dreaming, your idea of Occam’s Razor would be contained within the dream.
Oh, no no no! I didn’t mean to make a particularly big deal out of the possibility that we’re dreaming. I was trying to point out an analogous situation. There’s no plausible way to gather data in favor of the hypothesis that we’re not dreaming because the epistemology itself is entirely contained within the dream. I figured that might be easier to see than the point I was trying to make, which was the bit of balderdash that there’s no way to gather evidence in favor of P arising from something else because that evidence has to come through P. The arguments are somewhat analogous, only the one for dreaming works and the one for P doesn’t.
I personally don’t see any issues to tackle. Sure, I could be dreaming. I could also be insane, or a simulation, or a brain in a jar, or an infinite number of other things. But why should I care about these possibilities—not just “most of the time”, but at all?
Two and a half points:
Again, this was meant to be an analogy. I wasn’t trying to argue that we can’t trust our data-collection process because we could be dreaming. I meant to offer a situation about dreaming that seemed analogous to the situation with consciousness. I was hoping to illustrate where the “hard” part of the hard problem of consciousness is by pointing out where the “hard” part is in what I suppose we could call the “hard problem of dreaming”.
This issue actually does become extremely pragmatic as soon as you start trying to practice lucid dreaming. The mind seems to default to assuming that whatever is being experienced is being experienced in a wakeful state, at least for most people. You have to challenge that to get to lucid dreaming. There have been many times where I’ve been totally sure I’m awake after asking myself if I’m dreaming, and have even done dream-tests like trying to read text and trying to fly, only to discover that all my testing and certainty was ultimately irrelevant because once I wake up, I can know with absurdly high probability that I was in fact dreaming.
Closely related to that second point is the fact that you know you dream regularly. In fact, there’s quite a bit of evidence to suggest that pretty much everyone dreams several times every night. Most people don’t go crazy, or discover that they’re brains in a jar, or whatever, every day. So if there’s a way that everything you know could be completely wrong, the possibility that you’re dreaming is much, much higher on the list of hypotheses than that, say, you have amnesia and are on the Star Trek holodeck. So picking out dreaming as a particular issue to be concerned about over the other possibilities isn’t really committing the fallacy of privileging the hypothesis. If we’re going to go with “You’re hallucinating everything you know,” the “You’re dreaming” hypothesis is a pretty darn good one to start with!
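(To put rough, made-up-but-plausible numbers on it: adults spend something like a couple of hours out of every 24 dreaming, so before looking at any evidence at all,

  P(this moment is a dream) ≈ 2/24 ≈ 8%,

versus whatever vanishingly small prior you’d put on holodeck-amnesia. The exact figures don’t matter; the point is just that the dreaming hypothesis starts out orders of magnitude more probable than the exotic alternatives.)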
Again, though, I’m not trying to argue that we could be dreaming and therefore we can’t trust what we know. I was trying to point out an analogy which, upon reflection, doesn’t actually work.
All right, so it seems like we mostly agree now—cool!
I meant to offer a situation about dreaming that seemed analogous to the situation with consciousness.
Ok, I get it now, but I would still argue that we should assume we’re awake, until we have some evidence to the contrary; thus, the “hard problem of dreaming” is a non-issue. It looks like you might agree with me, somewhat:
This issue actually does become extremely pragmatic as soon as you start trying to practice lucid dreaming. The mind seems to default to assuming that whatever is being experienced is being experienced in a wakeful state, at least for most people. You have to challenge that to get to lucid dreaming.
In this situation, we assume that we’re awake a priori, and we are then deliberately trying to induce dreaming (which should be lucid, as well). So, we need a test that tells us whether we’ve succeeded or not. Thus, we need to develop some evidence-collecting techniques that work even when we’re asleep. This seems perfectly reasonable to me, but the setup is not analogous to your previous one—since we start out with the a priori assumption that we’re currently in the awake state; that we could transition to the dream state when we choose; and that there exists some evidence that will tell us which state we’re in. By contrast, the “hard problem of dreaming” scenario assumes that we don’t know which state we’re in, and that there’s no way to collect any relevant evidence at all.
All right, so it seems like we mostly agree now—cool!
Yep!
Rationality training: helping minds change since 2002. :-D
Ok, I get it now, but I would still argue that we should assume we’re awake, until we have some evidence to the contrary; thus, the “hard problem of dreaming” is a non-issue.
You’re coming at it from a philosophical angle, I think. I’m coming at it from a purely pragmatic one. Let’s say you’re dreaming right now. If you start with the assumption that you’re awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you’re really awake. Even if your empirical tests conclusively show that you’re dreaming, dreams have a way of screwing with your reasoning process so that early assumptions don’t update on evidence.
For instance, a typical dream test is jumping up in the air and trying to stay there a bit longer than physics would allow. The goal, usually, is flight. I commonly find that if I jump into the air and then hang there for just a little itty bitty bit longer than physics would allow, I think something like, “Oh, that was barely longer than possible. So I must not be quite dreaming.” That makes absolutely no sense at all, but it’s worth bearing in mind that you typically don’t have your whole mind available to you when you’re trying to become lucid. (You might once you are lucid, but that’s not terribly useful, is it?)
In this case, you have to be really, insanely careful not to jump to the conclusion that you’re awake. If you think you’re awake, you have to pause and ask yourself, “Well, is there any way I could be mistaken?” Otherwise your stupid dreaming self will just go along with the plot and ignore the floating pink elephants passing through your living room walls. This means that when you’re working on lucid dreaming, it usually pays to recognize that you could be dreaming and can never actually prove conclusively that you’re awake.
But I agree with you in all cases where lucid dreaming isn’t of interest. :-)
You’re coming at it from a philosophical angle, I think. I’m coming at it from a purely pragmatic one.
That’s funny, I was about to say the same thing, only about you instead of me. But I think I see where you’re coming from:
If you start with the assumption that you’re awake and then look for evidence to the contrary, typically the dream will accommodate your assumption and let you conclude you’re really awake… it’s worth bearing in mind that you typically don’t have your whole mind available to you when you’re trying to become lucid.
So, your primary goal (in this specific case) is not to gain any new insights about epistemology or consciousness or whatever, but to develop a useful skill: lucid dreaming. In this case, yes, your assumptions make perfect sense, since you must correct for an incredibly strong built-in bias that only surfaces while you’re dreaming. That makes sense.
I guess so!