From what I understand, things that are less parsimonious are less likely than the simpler option because of compounded probability. I.e., something less parsimonious requires a1..a25 to be true, while a simpler option just requires a1..a5 to be true.
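To make the compounding explicit, here’s a toy version of what I mean (assuming, purely for illustration, that the aᵢ are independent and each holds with the same probability p < 1):

$$
P(a_1 \wedge \dots \wedge a_{25}) \;=\; \prod_{i=1}^{25} P(a_i) \;=\; p^{25} \;\le\; p^{5} \;=\; P(a_1 \wedge \dots \wedge a_5).
$$

Every extra conjunct multiplies in a factor of at most 1, so the longer conjunction can never be more likely than the shorter one; under independence it shrinks exponentially.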
My point is that I don’t see any reason to think that we have any information about the probabilities. How can we say that “a1..a500 need to be true in order for consciousness to remain after brain destruction”? What observations have we made that would lead us to think that? My feeling is that we’ve never actually made an observation that says x ⇒ unconsciousness, because we’ve never actually been able to infer a state of unconsciousness.
Note: Sorry if I’m just not understanding the point about parsimony. I know that everyone seems to disagree with me and is making that point, so I’ve been trying to understand it and think about how it disproves my current belief, but for the reasons I explain above I don’t think the argument that “parsimony ⇒ it’s unlikely that consciousness remains” is valid. That argument requires information about what causes unconsciousness that I don’t think we have.
We have some data on what preconditions seem to produce consciousness (i.e., neuronal firing). However, this is only data on the preconditions that seem to produce consciousness in beings that can and do communicate or demonstrate their consciousness to us.
So you agree that brains are sufficient to explain consciousness. This consists of a1...a5.
Let Hypothesis 1 be that brains are conscious.
Then, as Hypothesis 2, you have the other conscious beings (a6...a25). Note that H2 also holds that brains are conscious (a1...a5). So you have...
H1: a1...a5: “Brains are conscious.”
H2: a1...a25: “Brains are conscious, and there are other types of consciousness outside our perception.”
H3: a1...a5, b1...b5: “Brains are conscious, and there’s a teapot in Andromeda.”
H4: a1...a5, c1...c5: “Brains are conscious, and there is no such thing as a consciousness outside our perception.”
H5: a1...a5, d1...d5: “Brains are conscious, and there are no teapots in Andromeda.”
H6: a1...a5, e1...e5: “Brains are conscious, and there is no water in Andromeda.”
H7: a1...a5, f1...f5: “Brains are conscious, and there is water in Andromeda.”
In order of parsimony, it’s obvious that H1 > H2, H3, H4, H5, H6, H7, right?
Right, but your real question is: is H2 more parsimonious than H4? And you’re right, practically there is no mathematically rigorous way to get the answer.
The same can be said of H3 vs H5, but you’ve got a strong intuition that H5 is more likely, right? Can you prove it rigorously? No, but you’ve got a pretty strong sense that there are no teapots to be found in Andromeda.
Similarly, between H6 and H7, you’ve got a fair sense that there is probably water somewhere in Andromeda. In fact, it would be more complicated to describe some circumstance by which Andromeda didn’t have any water.
So is consciousness more like the teapot, or more like the water molecule? Well, your description of the universe gets simpler when you don’t need to explain the teapot, whereas it gets more complex when you have to explain why there is no water in Andromeda.
Consider this: like all humans, you have instinctive animist tendencies. You are emotionally biased to favor H2 as something that seems possible, because you emotionally think of consciousness as a simple, basic element of reality, like water. I say consciousness is more like a teapot than it is like water.
Does math make special exceptions for consciousnesses that it does not make for teapots? Think about it... you can imagine water forming on a star somewhere; it’s fairly simple. How do you envision these separate consciousnesses forming? All possible ways in which these extra-physical consciousnesses could form are really complex. Your description of the universe, once you add these extra consciousnesses in, is going to get larger, not smaller.
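To put a rough shape on “the description gets larger, so the probability gets smaller” (this formalization is my own gloss, not something you’ve committed to): one common move is to give each hypothesis a prior that shrinks with the length of its shortest description,

$$
P(H) \;\propto\; 2^{-L(H)},
$$

where L(H) is the number of bits needed to specify H. “There is water somewhere in Andromeda” adds almost nothing to L(H), because it already falls out of the chemistry and astrophysics you have to describe anyway; “there are extra consciousnesses outside our perception” forces you to specify a whole new complex information processing structure and how it attaches to everything else, so L(H) grows and the prior drops off exponentially.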
You don’t automatically find yourself scrambling to explain why there might not be an afterlife... rather, you find yourself searching for an explanation for why there might be one. And that’s because afterlives make the description of the universe larger and more complex, and therefore require you to generate a story.
I admit, this isn’t a proof, and you’re not going to get a proof. But it’s a really strong intuition.
...I wonder if it would help if I came up with an unrelated idea that could only be rejected using intuition-parsimony and asked you to refute it. You’d instinctively call on parsimony, and then you could apply the same methods to the afterlife hypothesis.
My point is that I don’t see any reason to think that we have any information about the probabilities. How can we say that “a1..a500 need to be true in order for consciousness to remain after brain destruction”? What observations have we made that would lead us to think that? My feeling is that we’ve never actually made an observation that says x ⇒ unconsciousness, because we’ve never actually been able to infer a state of unconsciousness.
Right, but your real question is: is H2 more parsimonious than H4?
So you say that it can’t be proven, but you have a “really strong intuition” that it is. Why? What observations have you made about what causes unconsciousness that would lead you to believe that parsimony applies here? And how have you been able to infer unconsciousness?
Because consciousness (regardless of whether it’s based in something physical and observable, like the brain) by its inherent nature involves complex information processing. Even if you separate consciousness from the brain and put it in the abstract-unobservable-spirit-place-thing, it’s still a mathematically defined structure with a lot of complexity. When the brain is right in front of you, you can point to it and say, “There! That’s a complex information processing structure!” When the brain is no longer in front of you, you necessarily have to posit an un-observable complex information processing structure in spirit land.
Just replace brain with any other object, and you’ll get the same intuition. How do you know that some sort of time-keeping doesn’t continue in an unobserved location, even after the physical clock is destroyed? How do you know that the teapot-ness doesn’t continue on somewhere after the actual teapot is destroyed? And what about your past selves? They are all now destroyed; are they all continuing on somewhere?
by its inherent nature involves complex information processing
Interesting. There does seem to be evidence that you need a complex structure with complex information processing to provide a variety of conscious experiences. The evidence for this, I think, is just that distinct outcomes have distinct causes (at the smallest levels). You’d need a complex structure to take in all the different inputs and produce the corresponding conscious experience as the output. A simple structure can’t do that.
When the brain is no longer in front of you, you necessarily have to posit an un-observable complex information processing structure in spirit land.
I wouldn’t say that. Right now, we have some data on what parts of the brain need to be active for consciousness, but we can’t measure things at a level of precision below single neurons. What if something is happening on a quantum level that underlies consciousness? What if the thing that underlies consciousness is present at a level below the quantum level, like something beyond our current understanding of physics? This is quite possible, and I don’t think it’s ridiculous to posit that this level may go undisturbed when the brain is destroyed.
So, as far as retaining “complex consciousness” (able to experience a variety of things) after the brain is destroyed, I see two possibilities:
1) The thing that causes the consciousness you experience when you’re alive is destroyed, but some new structure is created and provides you with complex consciousness.
2) The thing that is currently causing your complex consciousness remains intact, and thus continues to provide you with that complex consciousness.
I agree that 1) is unlikely for reasons of parsimony. It’s 2) that I’m questioning. Why is 2) more likely to be false than true?
Answering that question myself, I actually think that it is. If I knew more about physics I’d have a stronger opinion here, but I figure that when you destroy the brain on the macro/microscopic level, it’s unlikely that the nano/quantum/smaller level that I’m suggesting consciousness might be on would go undisturbed.
So back to my original objection: “What have we observed that would tell us that x ⇒ unconsciousness?” Answer:
1) We’ve observed that the world is governed by cause and effect: the same physical state can’t lead to two different outcomes, so different outcomes must trace back to different physical states.
2) We’ve observed that consciousness involves a large variety of different outcomes (seeing red, seeing blue, feeling hot, feeling cold...). From 1) we can infer that there must be a distinct physical state that leads to each of these outcomes (see the sketch after this list).
3) We’ve observed that the brain is a complex structure that is correlated with consciousness. I don’t think we know the cause. Maybe it’s on the neuronal level; I think it probably has to be on a smaller level. But in any case, it seems likely that when you destroy the brain, you’d destroy whatever it is that’s producing consciousness, regardless of what level it’s on.
4) We’ve observed parsimony, so it’s unlikely that whatever causes consciousness will be regenerated out of the blue once it’s destroyed.
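Here’s a minimal sketch of the inference in 1) and 2), on my framing (the function notation is mine, not something we’ve established): read the cause-and-effect observation as saying that experience is a function of physical state,

$$
f : S \to E, \qquad f(s_1) = e_1 \neq e_2 = f(s_2) \;\Rightarrow\; s_1 \neq s_2,
$$

so any two distinct experiences (seeing red vs. seeing blue) must sit over two distinguishable physical states, and the number of experiences that can actually occur is at most the number of distinguishable states of the substrate. A rich variety of experiences therefore needs a structure with a correspondingly rich set of states, which is why a simple structure can’t do it.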
So to be explicit, I think it’s unlikely that consciousness remains after the brain is destroyed.
But one thing might remain. There might be a sort of basic/flat level of consciousness, as opposed to “nothingness”, and as opposed to the idea that consciousness has to involve our complex consciousness of experiencing all the various things we experience. There may be a basic level where we only sort of experience one thing.
If this level exists, how do we know that destroying the brain interferes with it? What do you think?
That’s all I’ve got for now. I probably haven’t expressed these points too clearly, as I’m just coming up with a lot of them and haven’t had time to analyze them enough. Please let me know what you think, and, if you can, sum it up and express it a little more clearly than I have. Thanks for the conversation!
There might be a sort of basic/flat level of consciousness
That position is called pan-psychism. I don’t think pan-psychism violates the rules of parsimony, but I also think that once you find yourself asking how quantum vacuums or baseball bats or water molecules subjectively feel, you need to back up a bit.
If this level exists, how do we know that destroying the brain interferes with it? What do you think?
Personally? I think that my qualia are what separate reality from all the other hypothetical mathematical structures, and I just leave it at that (so I evaluate consciousness from a strictly information-processing standpoint). Well, that’s the short version; I’d probably need to write a bit more for that to make sense.