That’s the point? There are no physical laws that force you to care about your pattern instead of caring about a black hole.
There, you are confusing not caring about what you are with what you are being a matter of semantics.
And there are no laws that force you to define death one way or another.
There you are confusing not being forced by the laws of physics to define death correctly with death being a matter of semantics.
And there is no unique value-free way to infer a low-level description from a high-level concept—that’s the ontological shift problem.
That’s potentially a good objection, but a high-level concept already has a set of properties, explanatorily prior to it being reduced. They don’t follow from our choice of reduction, rather, our choice of reduction is fixed by them.
Nothing is forcing you to equate ethically significant blacking out with “there are no instantiations of your pattern” instead of “the current instantiation is destroyed.”
(I assume that means having to keep the same matter.)
You’re conflating two things there—both options being equally correct and nothing forcing us to pick the other option. It’s true nothing forces us to pick the other option, but once we make an analysis of what consciousness, the continuity of consciousness and qualia are, it turns out the correct reduction is to the pattern, and not to the pattern+substance. People who pick other reductions made a mistake in their reasoning somewhere along the way.
This isn’t purely a matter of who wins in a philosophy paper. We’ll have mind uploading relatively soon, and the deaths of people who decide, to show off their wit, to define their substance as a part of them, will be as needless as that of someone who refuses to leave a burning bus because they’re defining it as a part of them.
I pretty much agree with your restatement of my position, but you didn’t present arguments for yours. Yes, I’m saying that all high-level concepts are value-laden. You didn’t specify what you mean by “correct”, but if it means “corresponds to reality”, it just doesn’t make sense for the definition of death to be incorrect. What would it even mean for it to be incorrect, when it describes the same reality?
That’s potentially a good objection, but a high-level concept already has a set of properties, explanatorily prior to it being reduced.
Yeah, they are called “preferences”, and the problem is that they are in a different language.
I mean, it’s not exactly inconsistent to have a system of preferences about how you resolve ontological shifts and to call that subset of preferences “correct”, and nothing is really low-level anyway, but… you do realize that mistakes in reduction are not the same thing as mistakes about reality?
This isn’t purely a matter of who wins in a philosophy paper.
So is killing someone by separating them from their bus.
Yes, I’m saying that all high-level concepts are value-laden.
No, concepts are value-neutral. Caring about a concept is distinct from whether a given reduction of a concept is correct, incorrect or arbitrary.
it just doesn’t make sense for the definition of death to be incorrect
You’re confusing semantics and ontology. While all definitions are arbitrary, the ontology of any given existing thing (like consciousness) is objective. (So far it seems that in your head, the arbitrariness of semantics somehow spills over into arbitrariness of ontology, so you think you can just say that to preserve your consciousness, you need to keep the same matter, and it will really be that way.)
you do realize that mistakes in reduction are not the same thing as mistakes about reality?
They are a subset of them: by making a mistake in the first, we’ll end up mistakenly believing incorrect things about reality (namely, that whatever we reduced our high-level concept to will behave the same way we expect our high-level concept to behave).
While all definitions are arbitrary, the ontology of any given existing thing (like consciousness) is objective.
How does this work? There is only one objective ontology—true physics. That we don’t know it complicates things somewhat, but on our current understanding (ethically significant) consciousness is not an ontological primitive. Everything is just quantum amplitude. Nothing really changes whether you call some part of the universe “consciousness” or “chair” or whatever. Your reduction can’t say things that contradict the real ontology, of course—you can’t say “a chair is literally these atoms and also it teleports faster than light”. But there is nothing that contradicts the real ontology in “I am my body”.
No, concepts are value-neutral. Caring about a concept is distinct from whether a given reduction of a concept is correct, incorrect or arbitrary.
There is no objective justification for the concept of a chair. AIXI doesn’t need to think about chairs. Like, really, try to specify the correctness of a reduction of “chair” without appealing to usefulness.
we’ll end up mistakenly believing incorrect things about reality
Wait, but we already assumed that we are using an “I am my body” definition that is correct about reality. Well, I assumed. Being incorrect means there must be some atoms in your model that are not in their real places. But “I am my body” doesn’t mean you forget that there will be another body with the same pattern at the destination of a teleporter. Or any other physical consequence. You still haven’t specified what incorrect things about atoms “I am my body” entails.
Is it about you thinking that consciousness specifically is ontologically primitive, or that “blacking out” can’t be reduced to whatever you want or something, and you would agree if we only talked about chairs? Otherwise I really want to see your specification of what “correct reduction” means.
How does this work? There is only one objective ontology—true physics. That we don’t know it complicates things somewhat, but on our current understanding (ethically significant) consciousness is not an ontological primitive.
It doesn’t have to be. Aspects of the pattern are the correct ontology of consciousness. They’re not ontologically primitive, but they are still the correct ontology of consciousness. Someone else can use the word consciousness to denote something else, like a chair, but then they are no longer talking about consciousness. They didn’t start talking about chairs (instead of consciousness) because they care about chairs. They started talking about chairs because they mistakenly believe that what we call consciousness reduces to chair-shaped collections of atoms that people can sit in, and if some random do-gooder found for them the mistake they made in reducing the concept of consciousness, they would agree that they made a mistake, stop caring about chair-shaped collections of atoms that people can sit in, and start caring about consciousness.
Otherwise I really want to see your specification of what “correct reduction” means.
I don’t have an explicit specification. Maybe it could be something like a process that maps a high-level concept to a lowest-level one while preserving the implicit and explicit properties of the concept.
“Different bodies have different consciousness” is an implicit property of the concept. The whole problem of changing ontology is that you can’t keep all the properties. And there is nothing except your current preferences that can decide which ones you keep.
They didn’t start talking about chairs (instead of consciousness) because they care about chairs.
In what parts is (the reduction of) your concept of being mistaken not isomorphic to caring? Or, if someone just didn’t talk about high-level concepts at all, what are your explicit and implicit properties of correctness that are not satisfied by knowing where all the atoms are and still valuing your body?
“Different bodies have different consciousness” is an implicit property of the concept.
No, it’s not, you just think it is. If you could reflect on all your beliefs, you’d come to the conclusion you were wrong. (For example, transplanting a brain to another body is something you’d (hopefully) agree preserves you. Etc.)
The whole problem of changing ontology is that you can’t keep all the properties.
Why not? All properties are reducible, so you can reduce them along with the concept.
In what parts is (the reduction of) your concept of being mistaken not isomorphic to caring?
All of them. Those are two different concepts. Being mistaken in the reduction means making a logical error at some step that a computer could point at. Caring means having a high-level (or a low-level) concept in your utility function.
Or, if someone just didn’t talk about high-level concepts at all, what are your explicit and implicit properties of correctness that are not satisfied by knowing where all the atoms are and still valuing your body?
If you mean not talking about them in the sense of not referring to them, I’d want to know how they reduced consciousness (or their personal survival) if they couldn’t refer to those concepts in the first place. If they were a computer who already started out as only referring to low-level concepts, they might not be making any mistakes, but they don’t care about the survival of anyone’s consciousness. No human is like that.
Wait, “logical” error? Like, you believe that “transplanting a brain to another body preserves you” is a theorem of QFT + ZFC or something? That… doesn’t make sense—there is no symbol for “brain” in QFT + ZFC.
No, it’s not, you just think it is. If you could reflect on all your beliefs, you’d come to the conclusion you were wrong.
How does it make that conclusion correct?
If you mean not talking about them in the sense of not referring to them, I’d want to know how they reduced consciousness (or their personal survival) if they couldn’t refer to those concepts in the first place.
I mean, after they stopped believing in (and valuing) a soul, they switched to valuing a physically correct description of their body without thinking about whether it was a correct reduction of a soul. And not being a computer, they are not perfectly correct in their description, but the point is: why not help them correct their description and make them more like a computer valuing the body? Where is the mistake in that?
Or even, can you give an example of just one correct step in reasoning about what the real properties of a concept (of a chair or consciousness or whatever) are?
Wait, “logical” error? Like, you believe that “transplanting a brain to another body preserves you” is a theorem of QFT + ZFC or something? That… doesn’t make sense—there is no symbol for “brain” in QFT + ZFC.
Why would ZFC have to play a role there? By a logical error, I had in mind committing a contradiction, an invalid implication, etc.
In other words, if you consider the explicit properties you believe the concept of yourself to have, and then you compare them against other beliefs you already hold, you’ll discover a contradiction which can be only remedied by accepting the reduction of “you” into a substanceless pattern. There is no other way.
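As an aside, the sort of contradiction “a computer could point at” can be sketched as a toy propositional model. This is only my own minimal illustration of a mechanical consistency check; the three belief statements and their encoding are hypothetical, not anything either side committed to:

```python
from itertools import product

# Toy belief set (hypothetical encoding) over two propositions:
#   m - "my body after sleep is made of the same matter"
#   s - "I remain myself after sleep"
beliefs = {
    "I am my body (so survival requires the same matter)": lambda m, s: s == m,
    "I remain myself after sleep":                         lambda m, s: s,
    "The body's matter turns over during sleep":           lambda m, s: not m,
}

# Brute-force search: is there any truth assignment satisfying all beliefs?
satisfying = [combo for combo in product([True, False], repeat=2)
              if all(b(*combo) for b in beliefs.values())]
print(satisfying)  # [] -> the beliefs are jointly unsatisfiable
```

Note that such a check only reports joint unsatisfiability; dropping any one of the three beliefs restores consistency, and the check itself doesn’t say which belief has to go, which is exactly what the rest of the exchange disputes.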
I mean, after they stopped believing in (and valuing) a soul, they switched to valuing a physically correct description of their body without thinking about whether it was a correct reduction of a soul.
That’s not possible. There must’ve been some step in between, even if it wasn’t made explicit, in their decision chain. (For example, they were seeking what low-level concept would fit their high-level concept of themselves (since a soul can no longer fit the bill) and didn’t do the reduction correctly.)
Or even, can you give an example of just one correct step in reasoning about what the real properties of a concept (of a chair or consciousness or whatever) are?
For example, that step could be imagining slowly replacing your neurons by mechanical ones performing the same function. (Then there would be subsequent steps, which would end with concluding the only possible reduction is to a substanceless pattern.)
By a logical error, I had in mind committing a contradiction, an invalid implication, etc.
Implication from what? There is no chain of implications that starts with “I think I value me” and “everything is atoms” and ends with “transplanting a brain to another body preserves me”. Unless you already have laws for how you reduce things.
In other words, if you consider the explicit properties you believe the concept of yourself to have, and then you compare them against other beliefs you already hold, you’ll discover a contradiction which can be only remedied by accepting the reduction of “you” into a substanceless pattern. There is no other way.
Your beliefs and explicit properties are in different ontologies—there is no law for comparing them. If your current reduction of yourself contradicts your beliefs, you can change your reduction. Yes, a substanceless pattern is a valid change of reduction. But a vast space of other reductions is also contradiction-free (physically possible, in other words): “I am my body” doesn’t require atoms to be in wrong places. You didn’t present an example of where it does, so you agree, right?
If by “beliefs” you mean high-level approximations, like in addition to “I am my body”, you have “I remain myself after sleep”, and then you figure out atoms and start to use “the body after sleep is not really the same”, then obviously there are many other ways to resolve this instead of “I am a substanceless pattern”. There is nothing preventing you from saying that “body” should mean different things in “I am my body” and “the body after sleep”; you can conclude that you are not you after sleep—why is one explicit property better than another, if excluding either solves the contradiction? Like I said, is it about consciousness specifically, where you think people can’t be wrong about what they point to when they think about blacking out? Because it’s totally possible to be wrong about your consciousness.
“To be sure, Fading Qualia may be logically possible. Arguably, there is no contradiction in the notion of a system that is so wrong about its experiences.” So, by “correct” you mean “doesn’t feel implausible”? Or what else makes imagining slowly replacing your neurons “correct”?
I mean, where did you even get the idea that it is possible to derive anything ethical using only correctness? That’s the is/ought distinction, isn’t it?
There is no chain of implications that starts with “I think I value me” and “everything is atoms” and ends with “transplanting a brain to another body preserves me”.
Right, you need more than those two statements. (Also, the first one doesn’t actually help—it doesn’t matter to the conclusion if you value yourself or not.)
contradiction-free (physically possible, in other words)
Contradiction-free doesn’t mean physically possible.
“I am my body” doesn’t require atoms to be in wrong places
Right. The contradiction is in your brain in the form of the data encoded there. It’s not an incorrect belief about where atoms are.
in addition to “I am my body”, you have “I remain myself after sleep”, and then you figure out atoms and start to use “the body after sleep is not really the same”, then obviously there are many other ways to resolve this instead of “I am a substanceless pattern”.
There are. The problem is that there is more than one (“I remain myself after sleep”) statement and if you consider all of them together, there is no longer another way.
you can conclude that you are not you after sleep
You can’t. Nobody can actually believe that.
Like I said, is it about consciousness specifically, where you think people can’t be wrong about what they point to when they think about blacking out? Because it’s totally possible to be wrong about your consciousness.
People can be wrong when doing this sort of reasoning, but the solution isn’t to postulate the answer by an axiom. The solution is to be really careful about the reasoning.
“To be sure, Fading Qualia may be logically possible. Arguably, there is no contradiction in the notion of a system that is so wrong about its experiences.” So, by “correct” you mean “doesn’t feel implausible”?
That would require some extremely convoluted theory of consciousness that nobody could believe. (For example, it would contradict one of the things you said previously, where a consciousness belongs to a macroscopic, spatially extended object (like a human body), and that’s what makes the object experience that consciousness. (That wouldn’t be possible on the theory of Fading Qualia (because Joe from the thought experiment doesn’t have fully functioning consciousness even though both the consciousness and his body function correctly, etc.).))
I mean, where did you even get the idea that it is possible to derive anything ethical using only correctness?
Oh, I don’t have ethics in mind there.
The problem is that there is more than one (“I remain myself after sleep”) statement and if you consider all of them together, there is no longer another way.
Well, yes, there are other statements—“I am my body” and “I remain myself after sleep” are among them. If your way allows contradicting “I am my body” then it’s not the only contradiction-free way, and other ways (that contradict other initial statements) are on the same footing. At least as far as logic goes.
The contradiction is in your brain in the form of the data encoded there. It’s not an incorrect belief about where atoms are.
Then patternist identity encodes a contradiction to “I am my body” in the same way. And if your choice of statements to contradict is not determined by either logic or beliefs about atoms, then it is determined by your preferences. There just aren’t many other kinds of stuff in the universe.
That would require some extremely convoluted theory of consciousness that nobody could believe.
So like I said, the requirement for a theory of consciousness not to be convoluted is just your preference. Just like any definition of what it means for someone to actually believe something—it’s not logic that forces you, because as long as you have a contradiction anyway, you can say that someone was wrong about themselves not believing in a convoluted theory of consciousness—and it’s not knowledge about reality. That’s why it’s about ethics. Or why do you think someone should prefer a non-convoluted theory of consciousness?
For example, it would contradict one of the things you said previously, where a consciousness belongs to a macroscopic, spatially extended object (like a human body), and that’s what makes the object experience that consciousness.
Nah, you can just always make it more convoluted^^. For example, I could say that usually microscopic changes are safe, but changing neurons into silicon is too much and destroys consciousness.
Well, yes, there are other statements—“I am my body” and “I remain myself after sleep” are among them.
The first one can’t be there. If you put it there and then we add everything else, there will be some statements we’re psychologically incapable of disbelieving, and us being our body isn’t among them, so it is that statement that will have to go.
And if your choice of statements to contradict is not determined by either logic or beliefs about atoms, then it is determined by your preferences. There just aren’t many other kinds of stuff in the universe.
There is a fourth kind of data—namely, what our psychological makeup determines we’re capable of believing. (Those aren’t our preferences.)
So like I said, the requirement for a theory of consciousness not to be convoluted is just your preference.
Right, but the key part there isn’t that it’s convoluted, but that we’re incapable of believing it.
Caring about what our psychological makeup determines we’re capable of believing, instead of partially operating only on surface reasoning until you change your psychological makeup, is a preference. It’s not a law that you must believe things in whatever sense you mean it for these things to matter. It may be useful for acquiring knowledge, but it’s not “correct” to always do everything that maximally helps your brain know true things. It’s not avoiding mistakes—it’s just selling your soul for knowledge.
Caring about what our psychological makeup determines we’re capable of believing, instead of partially operating only on surface reasoning until you change your psychological makeup, is a preference.
You can’t change your psychological makeup to allow you to hold a self-consistent system of beliefs that would include the belief that you are your body. Even if you could (which you can’t), you haven’t done it yet, so you can’t currently hold such a system of beliefs.
It’s not a law that you must believe things in whatever sense you mean it for these things to matter.
If you don’t believe any system of statements that includes that you are your body, then you have no reason to avoid a mind upload or a teleporter.
If you want to declare that you have null beliefs about what you are and say that you only care about your physical body (instead of believing that that is you), that’s not possible. Humans don’t psychologically work like that.
it’s not “correct” to always do everything that maximally helps your brain know true things
There are 2 confusions there:
1) There, you are confusing not caring about what you are with what you are being a matter of semantics.
2) There you are confusing not being forced by the laws of physics to define death correctly with death being a matter of semantics.
That’s potentially a good objection, but a high-level concept already has a set of properties, explanatorily prior to it being reduced. They don’t follow from our choice of reduction, rather, our choice of reduction is fixed by them.
(I assume that means having to keep the same matter.)
You’re conflating two things there—both options being equally correct and nothing forcing us to pick the other option. It’s true nothing forces us to pick the other option, but once we make an analysis of what consciousness, the continuity of consciousness and qualia are, it turns out the correct reduction is to the pattern, and not to the pattern+substance. People who pick other reductions made a mistake in their reasoning somewhere along the way.
This isn’t purely a matter of who wins in a philosophy paper. We’ll have mind uploading relatively soon, and the deaths of people who decide, to show off their wit, to define their substance as a part of them, will be as needless as someone’s who refuses to leave a burning bus because they’re defining it as a part of them.
I pretty much agree with your restatement of my position, but you didn’t present arguments for yours. Yes, I’m saying that all high-level concepts are value-laden. You didn’t specify what do you mean by “correct”, but for “corresponds to reality” it just doesn’t make sense for the definition of death to be incorrect. What do you even mean that it is incorrect, when it describes the same reality?
Yeah, they are called “preferences” and the problem is that they are in different language.
I mean, it’s not exactly inconsistent to have a system of preferences about how you resolve ontological shifts and to call that subset of preferences “correct” and nothing is really low-level anyway, but… you do realize that mistakes in reduction are not the same things as mistakes about reality?
So is killing someone by separating them from their bus.
No, concepts are value-neutral. Caring about a concept is distinct from whether a given reduction of a concept is correct, incorrect or arbitrary.
You’re confusing semantics and ontology. While all definitions are arbitrary, the ontology of any given existing thing (like consciousness) is objective. (So far it seems that in your head, the arbitrariness of semantics somehow spills over into arbitrariness of ontology, so you think you can just say that to preserve your consciousness, you need to keep the same matter, and it will really be that way).)
They are a subset of them (because by making a mistake in the first one, we’ll end up mistakenly believing incorrect things about reality (namely, that whatever we reduced our high-level concept to will behave the same way we expect our high-level concept to behave)).
How does this work? There is only one objective ontology—true physics. That we don’t know it complicates things somewhat, but on our current understanding (ethically significant) consciousness is not an ontological primitive. Everything is just quantum amplitude. Nothing really changes whether you call some part of universe “consciousness” or “chair” or whatever. Your reduction can’t say things that contradict real ontology, of course—you can’t say “chair is literally these atoms and also it teleports faster than light”. But there is nothing that contradict real ontology in “I am my body”.
There is no objective justification for a concept of a chair. AIXI doesn’t need to think about chairs. Like, really, try to specify correctness of a reduction of chair without appeal to usefulness.
Wait, but we already assumed that we are using “I am my body” definition that is correct about reality. Well, I assumed. Being incorrect means there must be some atoms in your model that are not in there real place. But “I am my body” doesn’t mean you forget that there will be another body with the same pattern at the destination of a teleporter. Or any other physical consequence. You still haven’t specified what incorrect things about atoms “I am my body” entails.
Is it about you thinking that consciousness specifically is ontologically primitive or that “blacking out” can’t be reduced to whatever you want or something and you would agree if we only talked about chairs? Otherwise I really want to see your specification of what does “correct reduction” means.
It doesn’t have to be. Aspects of the pattern are the correct ontology of consciousness. They’re not ontologically primitive, but that’s the correct ontology of consciousness. Someone else can use the word consciousness to denote something else, like a chair, but then they are no longer talking about consciousness. They didn’t start talking about chairs (instead of consciousness) because they care about chairs. They started talking about chairs because they mistakenly believe that what we call consciousness reduces to chair-shaped collections of atoms that people can sit in, and if some random good-doer found for them the mistake they made in reducing the concept of consciousness, they would agree that they made a mistake, stop caring about chair-shaped collections of atoms that people can sit in and start caring about consciousness.
I don’t have an explicit specification. Maybe it could be something like a process that maps a high-level concept to a lowest-level one while preserving the implicit and explicit properties of the concept.
“Different bodies have different consciousness” is an implicit property of the concept. The whole problem of changing ontology is that you can’t keep all the properties. And there is nothing except your current preferences that can decide which ones you keep.
In what parts (the reduction of) your concept of being mistaken is not isomorphic to caring? Or, if someone just didn’t talk about high-level concepts at all, what are your explicit and implicit properties of correctness that are not satisfied by knowing where all the atoms are and still valuing your body?
No, it’s not, you just think it is. If you could reflect on all your beliefs, you’d come to the conclusion you were wrong. (For example, transplanting a brain to another body is something you’d (hopefully) agree preserves you. Etc.)
Why not? All properties are reducible, so you can reduce them along with the concept.
All of them. Those are two different concepts. Being mistaken in the reduction means making a logical error at some step that a computer could point at. Caring means having a high-level (or a low-level) concept in your utility function.
If you mean not talking about them in the sense of not referring to them, I’d want to know how they reduced consciousness (or their personal survival) if they couldn’t refer to those concepts in the first place. If they were a computer who already started out as only referring to low-level concepts, they might not be making any mistakes, but they don’t care about the survival of anyone’s consciousness. No human is like that.
Wait, “logical” error? Like, you believe that “transplanting a brain to another body preserves you” is a theorem of QFT + ZFC or something? That… doesn’t make sense—there is no symbol for “brain” in QFT + ZFC.
How does it make that conclusion correct?
I mean after they stopped believing in (and valuing) soul they switched to valuing physically correct description of their body without thinking whether it was correct reduction of a soul. And not being a computer they are not perfectly correct in their description, but the point is why not help them correct their description and make them more like a computer valuing the body? Where is mistake in that?
Or even, can you give an example of just one correct step in the reasoning about what are the real properties of a concept (of a chair or consciousness or whatever)?
Why would ZFC have to play a role there? By a logical error, I had in mind committing a contradiction, an invalid implication, etc.
In other words, if you consider the explicit properties you believe the concept of yourself to have, and then you compare them against other beliefs you already hold, you’ll discover a contradiction which can be only remedied by accepting the reduction of “you” into a substanceless pattern. There is no other way.
That’s not possible. There must’ve been some step in between, even if it wasn’t made explicit, in their decision chain. (For example, they were seeking what low-level concept would fit their high-level concept of themselves (since a soul can no longer fit the bill) and didn’t do the reduction correctly.)
For example, that step could be imagining slowly replacing your neurons by mechanical ones performing the same function. (Then there would be subsequent steps, which would end with concluding the only possible reduction is to a substanceless pattern.)
Implication from what? There is no chain of implications that starts with “I think I value me” and “everything is atoms” and ends with “transplanting a brain to another body preserves me”. Unless you already have laws for how you reduce things.
Your beliefs and explicit properties are in different ontologies—there is no law for comparing them. If your current reduction of yourself contradicts your beliefs, you can change your reduction. Yes, a substanceless pattern is a valid change of reduction. But a vast space of other reductions is also contradiction-free (physically possible, in other words) - “I am my body” doesn’t require atoms to be in wrong places. You didn’t present an example of where it does, so you agree, right?
If by “beliefs” you mean high-level approximations, like in addition to “I am my body”, you have “I remain myself after sleep” and then you figure out atoms and start to use “the body after sleep is not really the same”, then obviously there are many other ways to resolve this instead of “I am substanceless pattern”. There is nothing preventing you from saying that “body” should mean different things in “I am my body” and “the body after sleep”, you can conclude that you are not you after sleep—why is one explicit property is better than another if excluding either solves the contradiction? Like I said, is it about consciousness specifically, where you think people can’t be wrong about what way point to when they think about blacking out? Because it’s totally possible to be wrong about your consciousness.
“To be sure, Fading Qualia may be logically possible. Arguably, there is no contradiction in the notion of a system that is so wrong about its experiences.” So, by “correct” you mean “doesn’t feel implausible”? Or what else makes imagining slowly replacing your neurons “correct”?
I mean, where did you even get the idea that it is possible to derive anything ethical using only correctness? That’s the is/ought distinction, isn’t it?
Right, you need more than those two statements. (Also, the first one doesn’t actually help—it doesn’t matter to the conclusion if you value yourself or not.)
Contradiction-free doesn’t mean physically possible.
Right. The contradiction is in your brain in the form of the data encoded there. It’s not an incorrect belief about where atoms are.
There are. The problem is that there is more than one such statement (“I remain myself after sleep” is one of them), and if you consider all of them together, there is no longer another way.
You can’t. Nobody can actually believe that.
People can be wrong when doing this sort of reasoning, but the solution isn’t to postulate the answer by an axiom. The solution is to be really careful about the reasoning.
That would require some extremely convoluted theory of consciousness that nobody could believe. (For example, it would contradict one of the things you said previously: that a consciousness belongs to a macroscopic, spatially extended object (like a human body), and that that is what makes the object experience that consciousness. That wouldn’t be possible on the theory of Fading Qualia, because Joe from the thought experiment doesn’t have fully functioning consciousness even though both the consciousness and his body function correctly, etc.)
Oh, I don’t have ethics in mind there.
Well, yes, there are other statements—“I am my body” and “I remain myself after sleep” are among them. If your way allows contradicting “I am my body” then it’s not the only contradiction-free way, and other ways (that contradict other initial statements) are on the same footing. At least as far as logic goes.
Then patternist identity encodes a contradiction with “I am my body” in the same way. And if your choice of which statements to contradict is not determined by either logic or beliefs about atoms, then it is determined by your preferences. There just aren’t many other kinds of stuff in the universe.
So like I said, the requirement for a theory of consciousness to not be convoluted is just your preference. Just like any definition of what it means for someone to actually believe something. It’s not logic that forces you, because as long as you have a contradiction anyway, you can say that someone was wrong about themselves not believing in a convoluted theory of consciousness; and it’s not knowledge about reality either. That’s why it’s about ethics. Or why do you think someone should prefer a non-convoluted theory of consciousness?
Nah, you can just always make it more convoluted^^. For example, I could say that usually microscopic changes are safe, but changing neurons into silicon is too much and destroys consciousness.
The first one can’t be there. If you put it there and then we add everything else, there will be some statements we’re psychologically incapable of disbelieving, and us being our body isn’t among them, so it is that statement that will have to go.
There is a fourth kind of data—namely, what our psychological makeup determines we’re capable of believing. (Those aren’t our preferences.)
Right, but the key part there isn’t that it’s convoluted, but that we’re incapable of believing it.
Caring about what our psychological makeup determines we’re capable of believing, instead of operating partially on surface-level reasoning until you change your psychological makeup, is a preference. It’s not a law that you must believe things, in whatever sense you mean “believe”, for these things to matter. It may be useful for acquiring knowledge, but it’s not “correct” to always do everything that maximally helps your brain know true things. That’s not avoiding mistakes; it’s just selling your soul for knowledge.
You can’t change your psychological makeup to allow you to hold a self-consistent system of beliefs that would include the belief that you are your body. Even if you could (which you can’t), you haven’t done it yet, so you can’t currently hold such a system of beliefs.
If you don’t believe any system of statements that includes that you are your body, then you have no reason to avoid a mind upload or a teleporter.
If you want to declare that you have null beliefs about what you are and say that you only care about your physical body (instead of believing that that is you), that’s not possible. Humans don’t psychologically work like that.
You can’t avoid that. By the time you are avoiding doing something that would maximally help you know the truth, you already know your current belief is false.