How does this work? There is only one objective ontology—true physics. That we don’t know it complicates things somewhat, but on our current understanding (ethically significant) consciousness is not an ontological primitive.
It doesn’t have to be. Aspects of the pattern are the correct ontology of consciousness. They’re not ontologically primitive, but that’s the correct ontology of consciousness. Someone else can use the word consciousness to denote something else, like a chair, but then they are no longer talking about consciousness. They didn’t start talking about chairs (instead of consciousness) because they care about chairs. They started talking about chairs because they mistakenly believe that what we call consciousness reduces to chair-shaped collections of atoms that people can sit in, and if some random do-gooder pointed out to them the mistake they made in reducing the concept of consciousness, they would agree that they had made a mistake, stop caring about chair-shaped collections of atoms that people can sit in, and start caring about consciousness.
Otherwise I really want to see your specification of what “correct reduction” means.
I don’t have an explicit specification. Maybe it could be something like a process that maps a high-level concept to a lowest-level one while preserving the implicit and explicit properties of the concept.
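As a very rough sketch of that (the notation below is only illustrative, not a worked-out theory): write H for the high-level concept, L for a candidate low-level referent, and r for the mapping between their vocabularies; then r is a correct reduction iff it preserves the truth of every explicit and implicit property.

```latex
% Toy formalization of "correct reduction" (illustrative notation only):
% r is correct iff every explicit or implicit property phi of the
% high-level concept H holds exactly when its image r(phi) holds of the
% low-level referent L.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  r \colon H \to L \ \text{is a correct reduction} \iff
  \forall \varphi \in P_{\mathrm{explicit}}(H) \cup P_{\mathrm{implicit}}(H) :
  \bigl( \varphi(H) \leftrightarrow r(\varphi)(L) \bigr)
\]
\end{document}
```

On that sketch, the dispute below is exactly about whether any r can satisfy the quantifier over all the properties at once.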
“Different bodies have different consciousness” is an implicit property of the concept. The whole problem of changing ontology is that you can’t keep all the properties. And there is nothing except your current preferences that can decide which ones you keep.
They didn’t start talking about chairs (instead of consciousness) because they care about chairs.
In what parts is (the reduction of) your concept of being mistaken not isomorphic to caring? Or, if someone just didn’t talk about high-level concepts at all, what are your explicit and implicit properties of correctness that are not satisfied by knowing where all the atoms are and still valuing your body?
“Different bodies have different consciousness” is an implicit property of the concept.
No, it’s not, you just think it is. If you could reflect on all your beliefs, you’d come to the conclusion you were wrong. (For example, transplanting a brain to another body is something you’d (hopefully) agree preserves you. Etc.)
The whole problem of changing ontology is that you can’t keep all the properties.
Why not? All properties are reducible, so you can reduce them along with the concept.
In what parts is (the reduction of) your concept of being mistaken not isomorphic to caring?
All of them. Those are two different concepts. Being mistaken in the reduction means making a logical error at some step that a computer could point at. Caring means having a high-level (or a low-level) concept in your utility function.
Or, if someone just didn’t talk about high-level concepts at all, what are your explicit and implicit properties of correctness that are not satisfied by knowing where all the atoms are and still valuing your body?
If you mean not talking about them in the sense of not referring to them, I’d want to know how they reduced consciousness (or their personal survival) if they couldn’t refer to those concepts in the first place. If they were a computer who already started out as only referring to low-level concepts, they might not be making any mistakes, but they don’t care about the survival of anyone’s consciousness. No human is like that.
Wait, “logical” error? Like, you believe that “transplanting a brain to another body preserves you” is a theorem of QFT + ZFC or something? That… doesn’t make sense—there is no symbol for “brain” in QFT + ZFC.
No, it’s not, you just think it is. If you could reflect on all your beliefs, you’d come to the conclusion you were wrong.
How does it make that conclusion correct?
If you mean not talking about them in the sense of not referring to them, I’d want to know how they reduced consciousness (or their personal survival) if they couldn’t refer to those concepts in the first place.
I mean after they stopped believing in (and valuing) a soul, they switched to valuing a physically correct description of their body without thinking about whether it was a correct reduction of a soul. And not being computers, they are not perfectly correct in their description, but the point is: why not help them correct their description and make them more like a computer valuing the body? Where is the mistake in that?
Or even, can you give an example of just one correct step in the reasoning about what the real properties of a concept (of a chair or consciousness or whatever) are?
Wait, “logical” error? Like, you believe that “transplanting a brain to another body preserves you” is a theorem of QFT + ZFC or something? That… doesn’t make sense—there is no symbol for “brain” in QFT + ZFC.
Why would ZFC have to play a role there? By a logical error, I had in mind committing a contradiction, an invalid implication, etc.
In other words, if you consider the explicit properties you believe the concept of yourself to have, and then you compare them against other beliefs you already hold, you’ll discover a contradiction which can only be remedied by accepting the reduction of “you” into a substanceless pattern. There is no other way.
I mean after they stopped believing in (and valuing) a soul, they switched to valuing a physically correct description of their body without thinking about whether it was a correct reduction of a soul.
That’s not possible. There must’ve been some step in between in their decision chain, even if it wasn’t made explicit. (For example, they were seeking what low-level concept would fit their high-level concept of themselves (since a soul can no longer fit the bill) and didn’t do the reduction correctly.)
Or even, can you give an example of just one correct step in the reasoning about what the real properties of a concept (of a chair or consciousness or whatever) are?
For example, that step could be imagining slowly replacing your neurons by mechanical ones performing the same function. (Then there would be subsequent steps, which would end with concluding the only possible reduction is to a substanceless pattern.)
By a logical error, I had in mind committing a contradiction, an invalid implication, etc.
Implication from what? There is no chain of implications that starts with “I think I value me” and “everything is atoms” and ends with “transplanting a brain to another body preserves me”. Unless you already have laws for how you reduce things.
In other words, if you consider the explicit properties you believe the concept of yourself to have, and then you compare them against other beliefs you already hold, you’ll discover a contradiction which can only be remedied by accepting the reduction of “you” into a substanceless pattern. There is no other way.
Your beliefs and explicit properties are in different ontologies—there is no law for comparing them. If your current reduction of yourself contradicts your beliefs, you can change your reduction. Yes, a substanceless pattern is a valid change of reduction. But a vast space of other reductions is also contradiction-free (physically possible, in other words): “I am my body” doesn’t require atoms to be in the wrong places. You didn’t present an example of where it does, so you agree, right?
If by “beliefs” you mean high-level approximations, like in addition to “I am my body”, you have “I remain myself after sleep” and then you figure out atoms and start to use “the body after sleep is not really the same”, then obviously there are many other ways to resolve this instead of “I am a substanceless pattern”. There is nothing preventing you from saying that “body” should mean different things in “I am my body” and “the body after sleep”; you can conclude that you are not you after sleep—why is one explicit property better than another if excluding either solves the contradiction? Like I said, is it about consciousness specifically, where you think people can’t be wrong about what they point to when they think about blacking out? Because it’s totally possible to be wrong about your consciousness.
“To be sure, Fading Qualia may be logically possible. Arguably, there is no contradiction in the notion of a system that is so wrong about its experiences.” So, by “correct” you mean “doesn’t feel implausible”? Or what else makes imagining slowly replacing your neurons “correct”?
I mean, where did you even get the idea that it is possible to derive anything ethical using only correctness? That’s the is/ought distinction, isn’t it?
There is no chain of implications that starts with “I think I value me” and “everything is atoms” and ends with “transplanting a brain to another body preserves me”.
Right, you need more than those two statements. (Also, the first one doesn’t actually help—it doesn’t matter to the conclusion if you value yourself or not.)
contradiction-free (physically possible, in other words)
Contradiction-free doesn’t mean physically possible.
“I am my body” doesn’t require atoms to be in the wrong places
Right. The contradiction is in your brain in the form of the data encoded there. It’s not an incorrect belief about where atoms are.
in addition to “I am my body”, you have “I remain myself after sleep” and then you figure out atoms and start to use “the body after sleep is not really the same”, then obviously there are many other ways to resolve this instead of “I am a substanceless pattern”.
There are. The problem is that there is more than one such statement (“I remain myself after sleep” is just one of them), and if you consider all of them together, there is no longer another way.
you can conclude that you are not you after sleep
You can’t. Nobody can actually believe that.
Like I said, is it about consciousness specifically, where you think people can’t be wrong about what they point to when they think about blacking out? Because it’s totally possible to be wrong about your consciousness.
People can be wrong when doing this sort of reasoning, but the solution isn’t to postulate the answer by an axiom. The solution is to be really careful about the reasoning.
“To be sure, Fading Qualia may be logically possible. Arguably, there is no contradiction in the notion of a system that is so wrong about its experiences.” So, by “correct” you mean “doesn’t feel implausible”?
That would require some extremely convoluted theory of consciousness that nobody could believe. (For example, it would contradict one of the things you said previously, where a consciousness belongs to a macroscopic, spatially extended object (like a human body), and that’s what makes the object experience that consciousness. That wouldn’t be possible on the theory of Fading Qualia, because Joe from the thought experiment doesn’t have fully functioning consciousness even though both the consciousness and his body function correctly, etc.)
I mean, where did you even get the idea that it is possible to derive anything ethical using only correctness?
Oh, I don’t have ethics in mind there.
The problem is that there is more than one such statement (“I remain myself after sleep” is just one of them), and if you consider all of them together, there is no longer another way.
Well, yes, there are other statements—“I am my body” and “I remain myself after sleep” are among them. If your way allows contradicting “I am my body” then it’s not the only contradiction-free way, and other ways (that contradict other initial statements) are on the same footing. At least as far as logic goes.
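To make “on the same footing, as far as logic goes” concrete, here is a toy sketch (the three atoms and the “bridge” principle are my own illustrative choices, not anything from this exchange): a small propositional belief set that is jointly inconsistent, where brute force shows that dropping any one belief restores consistency, and nothing in the logic itself picks which one to drop.

```python
from itertools import product

# Toy sketch (illustrative only): beliefs as propositional constraints
# over three atoms. The full set is inconsistent, but dropping any one
# belief yields a consistent subset -- logic alone doesn't pick which.

ATOMS = ("i_am_body", "same_self_after_sleep", "same_body_after_sleep")

# Each belief is a predicate over a truth assignment (a dict atom -> bool).
beliefs = {
    "I am my body": lambda v: v["i_am_body"],
    "I remain myself after sleep": lambda v: v["same_self_after_sleep"],
    # The physics lesson: atoms get swapped around during sleep.
    "the body after sleep is not really the same":
        lambda v: not v["same_body_after_sleep"],
    # Hypothetical bridge principle: if I am my body, I persist through
    # sleep exactly when the body does.
    "bridge": lambda v: (not v["i_am_body"])
        or (v["same_self_after_sleep"] == v["same_body_after_sleep"]),
}

def consistent(subset):
    """True iff some truth assignment satisfies every belief in `subset`."""
    for bits in product((False, True), repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(beliefs[name](v) for name in subset):
            return True
    return False

names = list(beliefs)
assert not consistent(names)  # the four beliefs are jointly contradictory

for dropped in names:
    repair = [n for n in names if n != dropped]
    print(f"drop {dropped!r}: consistent without it -> {consistent(repair)}")
```

Each of the four drops restores consistency, so picking among them takes something beyond consistency itself.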
The contradiction is in your brain in the form of the data encoded there. It’s not an incorrect belief about where atoms are.
Then patternist identity encodes a contradiction with “I am my body” in the same way. And if your choice of statements to contradict is not determined by either logic or beliefs about atoms, then it is determined by your preferences. There just aren’t many other kinds of stuff in the universe.
That would require some extremely convoluted theory of consciousness that nobody could believe.
So like I said, the requirement for a theory of consciousness to not be convoluted is just your preference. Just like any definition of what it means for someone to actually believe something—it’s not logic that forces you (nor knowledge about reality), because as long as you have a contradiction anyway, you can say that someone was wrong about themselves not believing in a convoluted theory of consciousness. That’s why it’s about ethics. Or why do you think someone should prefer a non-convoluted theory of consciousness?
For example, it would contradict one of the things you said previously, where a consciousness belongs to a macroscopic, spatially extended object (like a human body), and that’s what makes the object experience that consciousness.
Nah, you can just always make it more convoluted^^. For example I could say that usually microscopic changes are safe, but changing neurons into silicon is too much and destroys consciousness.
Well, yes, there are other statements—“I am my body” and “I remain myself after sleep” are among them.
The first one can’t be there. If you put it there and then we add everything else, there will be some statements we’re psychologically incapable of disbelieving, and us being our body isn’t among them, so it is that statement that will have to go.
And if your choice of statements to contradict is not determined by either logic or beliefs about atoms, then it is determined by your preferences. There just aren’t many other kinds of stuff in the universe.
There is a fourth kind of data—namely, what our psychological makeup determines we’re capable of believing. (Those aren’t our preferences.)
So like I said, the requirement for a theory of consciousness to not be convoluted is just your preference.
Right, but the key part there isn’t that it’s convoluted, but that we’re incapable of believing it.
Caring about what our psychological makeup determines we’re capable of believing, instead of partially operating only on surface reasoning until you change your psychological makeup, is a preference. It’s not a law that you must believe things (in whatever sense of “believe” you mean) for these things to matter. It may be useful for acquiring knowledge, but it’s not “correct” to always do everything that maximally helps your brain know true things. It’s not avoiding mistakes—it’s just selling your soul for knowledge.
Caring about what our psychological makeup determines we’re capable of believing, instead of partially operating only on surface reasoning until you change your psychological makeup, is a preference.
You can’t change your psychological makeup to allow you to hold a self-consistent system of beliefs that would include the belief that you are your body. Even if you could (which you can’t), you haven’t done it yet, so you can’t currently hold such a system of beliefs.
It’s not a law that you must believe things (in whatever sense of “believe” you mean) for these things to matter.
If you don’t believe any system of statements that includes that you are your body, then you have no reason to avoid a mind upload or a teleporter.
If you want to declare that you have null beliefs about what you are and say that you only care about your physical body (instead of believing that that is you), that’s not possible. Humans don’t psychologically work like that.
it’s not “correct” to always do everything that maximally helps your brain know true things
You can’t avoid that. By the time you are avoiding doing something that would maximally help you know the truth, you already know your current belief is false.