Those are both good, coherent answers. I’ll assume from your lack of comment on (3) and (4) that you currently find them difficult, so I’ll aim in that direction.
Here is the one point I want to expand upon a little. You said that the reason that you “feel” like you have free will is the following:
We animals ascribe agency to all kinds of wacky shit, including these bodies. Hence, the ego.
Now, that explains why we assign souls to other people and to clearly non-conscious things like rain and disease. It also explains much of the belief in God. But does it explain why we assign souls to ourselves? How do you justify to yourself the fact that you can personally feel your thoughts, emotions, and sensory input?
If you think you know the answer to this, or if you think that’s a silly question in the first place, elaborate. If you think it’s a reasonable and difficult question, or if you think the question is unanswerable, we’ll come back to it later.
Oh, also: I believe that this question is the puzzle that lies at the crux of the B.I.A.S. Please elaborate if you disagree with me on that.
---Anyway, moving on.
You didn’t define free will like I asked, but that’s okay—it indicates that you are implicitly using a definition of free will which is impossible in any logical universe, and thus cannot be coherently defined without contradiction. And you correctly perceive that the universe runs, and you and I are portions of that process. So far so good.
A universe made of axioms makes sense, right? You can imagine a deterministic multiverse or a random world-line. You can even create some simple universes.
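To make the “simple universes” bit concrete, here is a minimal sketch of what I mean: a toy universe run forward from a handful of axioms. I’ve picked an elementary cellular automaton; the particular rule and sizes are arbitrary choices for illustration, nothing more.

```python
# A minimal sketch of a "universe from axioms": an elementary cellular
# automaton. The axioms are one initial state plus one update rule; every
# later state follows deterministically. Rule 110 and the grid width are
# arbitrary illustrative choices.

RULE = 110   # update rule, encoded as an 8-bit lookup table
WIDTH = 64   # number of cells in the toy universe
STEPS = 32   # how many ticks of "time" to run

def step(cells):
    """Apply the rule to every cell based on its left/self/right neighbours."""
    out = []
    for i in range(len(cells)):
        left = cells[(i - 1) % len(cells)]
        right = cells[(i + 1) % len(cells)]
        index = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> index) & 1)
    return out

# Initial condition: a single live cell in the middle.
state = [0] * WIDTH
state[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else "." for c in state))
    state = step(state)
```

Nothing in that toy world is conscious, of course; it is only meant to show the sense in which “the universe runs” from fixed rules.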
Now answer me this:
Can you imagine creating a person within a consistent set of axioms? I’m not saying that you actually know how to construct a person, of course—I’m just asking whether it is an intuitive, gut-level truth to you that a set of axioms could in principle create conscious beings. I know that you already intellectually believe it possible—but do you alieve that it is possible?
Note that I’m not asking whether or not you personally could exist within this set of axioms and still have subjective experience... though we will get to that later. I’m just asking whether or not it’s intuitive that beings similar to you could exist within a set of axioms.
If it is intuitive to you that axioms can construct people, elaborate a little on a very basic outline of how this might be done.
If it’s not intuitive, let me know and we’ll focus on making it so.
I did not comment on 3 and 4 because I thought you wanted to judge first whether I understood the first two.
But does it explain why we assign souls to ourselves? How do you justify to yourself the fact that you can personally feel your thoughts, emotions, and sensory input?
To me, yes. I think that a theory of mind is ascribed to oneself first, then extended to other people. At a beginner level, developing a theory of mind toward yourself is easy because you get nearly instant feedback. Your proprioception as a child is defined by instant and (mostly) accurate feedback for everything within your local skin-border. After realizing that you have these experiences, seeing other humans behave just as if they also have them, and being nearly compelled by our wetware to generalize this to other animals and objects, our “grouping objects” programs group these bundles of driving behaviors into an abstract object (which is visualized subconsciously as a concrete object) that we call a soul.
You didn’t define free will like I asked, but that’s okay—it indicates that you are implicitly using a definition of free will which is impossible in any logical universe, and thus cannot be coherently defined without contradiction...
That’s a much more coherent summary of what I meant, yes.
If it is intuitive to you that axioms can construct people, elaborate a little on a very basic outline of how this might be done.
You just said it—“A universe made of axioms makes sense, right?” My existence in a universe shows that it in fact has been done, saving me the trouble of proving how.
I enjoy your conversation, but I’m not particularly on the brink of an existential crisis here. In reference to my article, I am simply acknowledging that this is a limitation of the human brain to be guarded against, much as not sticking my hand on a hot stove prevents tissue damage. I don’t expect people to be immune from it, but we’d be better off if we were more conscious of it. Instead I brought on a flurry of angry retorts that amount to “Hey, I’m not subject to fallibility; just who do you think you are, accusing me of being human?”
Haha, well, it’s only worked on one person so far; I just figured it would be amusing to try. I meant for you to judge for yourself whether the first two questions were easy or hard and to decide whether to do the second ones accordingly—sorry if that didn’t come across. I wasn’t sure where you were coming from philosophically, and maybe the first two questions were trivial to you, so I put some harder ones out there.
I’ll just explain what I was attempting to do here, since it seems you might be getting tired of the whole question-and-answer thing. In retrospect maybe that was dumb of me—I was mostly curious to see if someone else would independently arrive at my conclusions if asked the same questions, as a way to test the strength of my conclusions.
It’s easy to imagine, from an objective and detached viewpoint, a universe like our own in which people exist. It’s easy to reduce the universe into a set of axioms.
But none of that explains why you are in your body. It’s called the hard problem of consciousness. If a person is still confused about the hard problem, they will continue to instinctively believe in souls—and rightly so, because if one doesn’t see how physical matter could lead to qualia, it’s quite natural to think something unexplainable and mysterious is going on. Really, if you aren’t sure how matter gives rise to qualia, how would you know whether or not some precious link was severed if you, say, teleported by destroying and remaking all your molecules or something?
So what the previous questions have established is that you’re able to do step one (imagine the universe as a set of axioms). The next set was intended to establish step two (figure out how qualia fits into this picture).
I was hoping that if we got past step two, you wouldn’t have B.I.A.S. any longer (although perhaps you’d be confused in an entirely different way). My implicit assumption here is that B.I.A.S. is ultimately a problem of not having dissolved the hard problem of consciousness, rather than an instinctive bias that all humans must have.
Thoughts? Should I continue with the next set of questions? Should I just tell you what I think? Or have you already figured this out to your satisfaction? (If so, I want to hear your conclusions.)
I was mostly curious to see if someone else would independently arrive at my conclusions if asked the same questions, as a way to test the strength of my conclusions.
I’m not offended; that’s one of my favorite games. My thought process is so different from my peers’ that I constantly need to validate it through “coerced replication”. I know I’m on the right track when people clearly refuse to follow my train of thought because they squirm from self-reflection. Yesterday I got a person to vehemently deny that he dislikes his mother, while simultaneously giving “safer” evidence of that very conclusion, because, you know, you’re supposed to love your parents.
Regarding the hard problem of consciousness, I am not even sure that it’s a valid problem. The mechanics of sensory input are straightforward enough; the effects of association, learning, and memory are at least conceptually understood, even if the exact mechanics might still be fuzzy; I don’t see what more is necessary. All normal-functioning humans pretty much run on the exact same OS, so naturally the experience will be nearly identical. I have a (probably untestable) theory that, due to different nutritional requirements, a cat, for example, would experience the flavor of tuna almost identically to how we taste sugar. And a cat eating sugar would taste something like eating plain flour, and catnip would be like smoking crack for humans. The experience itself between different creatures can be one of several stock experiences, brought on by different stimuli, just because we all share a similar biological plan (all animals with brains, for instance).
An experience like an orgasm could be classified as something like a Level 455/293 release in relative serotonin and oxytocin levels, whereas eating chocolate causes a Level 231/118 in a specific person. If by some chance you measured the next person to have a Level 455/293 from eating chocolate, then you know that what they are experiencing is basically equivalent to an orgasm, without the mess. One human’s baseline experience of blue is likely to be very similar to another’s, but their individual experiences would modify it from that point. You know that they experience blue in much the same way that you know they have an anus. It’s a function of the hardware. In some rare cases you might be wrong, but there’s nothing mysterious about it to me.
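To make that concrete, here is a toy sketch of the classification idea (the reference levels just reuse my made-up numbers above; none of this is real neurochemistry, and the names are purely illustrative):

```python
# A toy sketch of the "stock experiences" idea: label an observed
# (serotonin, oxytocin) response by whichever known experience it sits
# closest to. The reference numbers are the made-up levels from the
# comment above, not real measurements.
import math

STOCK_EXPERIENCES = {
    "orgasm": (455, 293),
    "eating chocolate": (231, 118),
}

def classify(serotonin, oxytocin):
    """Return the stock experience whose levels are nearest to the measurement."""
    return min(
        STOCK_EXPERIENCES,
        key=lambda name: math.dist(STOCK_EXPERIENCES[name], (serotonin, oxytocin)),
    )

# Someone who measured at 455/293 while eating chocolate would be labeled
# as having the orgasm-equivalent experience, as described above.
print(classify(455, 293))   # -> orgasm
print(classify(240, 120))   # -> eating chocolate
```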
Go ahead and tell me what your theories are, I’m sure that I’m not the only one listening. Even if we aren’t enlightening anybody, I’m sure we are amusing them.
Apologies for the late response. Grant proposals and exams.
I think the following series of posts really captures how I go about intuitively deconstructing the notion of “individual”.
EY discusses his confusion concerning the anthropic trilemma and I think his confusion is a result of implicit Belief In A Soul, and demonstrates many similarities to the problems you outlined in your post. KS tries to explain why this dissonance occurs here and I explain why dissonance need not necessarily occur here in the comments.
To summarize the relevant portions of this discussion, EY(2009) thinks that if you reject the notion that there is a “thread” connecting your past and future subjective experiences, human utility functions become incoherent. I attempt to intuitively demonstrate that this is not the case.
Hopefully people will weigh in on my comment over there, and I can see if it holds water.
As I read the “Anthropic Trilemma”, my response could be summed up thus:
“There is no spoon.”
So many of the basic arguments contained in it rest on undefined concepts, if you dig deep enough. We talk about the continuation of consciousness in the same way that we talk about a rock or an apple. The only way that a sense of self doesn’t exist is the same way that a rock or an apple doesn’t exist, in the strictest technical sense. To accept a human being as a classical object in the first place disqualifies a person from taking a quantum-mechanical cop-out when it comes to defining subjective experience. People here aren’t saying to themselves, “Huh? Where do you get this idea that a person exists for more than the present moment?? That’s crazy talk!” It’s just an attempt to deny the existence of a subjective experience that people actually do, um, subjectively experience.
Well, if you duplicate an apple (or even another person) there is never any confusion about which one is “real”. They are both identical duplicates.
However, when you talk about duplicating yourself, all these smart people are suddenly wondering which “self” they would subjectively experience being inside. And that’s pretty ridiculous.
So you need to point out that the self doesn’t really exist over time in the strictest technical sense, in order to make people stop wondering which identical copy their subjective “self” will end up in.
These questions don’t make sense because, in the same way that you can’t subjectively experience other people, you can’t subjectively experience yourself from the past or the future.
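If it helps, here is a toy way to see the duplication point in code (just an illustrative Python sketch; the apple and its properties are made up):

```python
# Two perfect copies are equal in every observable respect, and "which one
# is the real one" is a fact about our labels, not about the objects.
import copy

apple = {"color": "red", "mass_g": 150, "bruises": []}
duplicate = copy.deepcopy(apple)

print(apple == duplicate)   # True  -> identical in every property we can check
print(apple is duplicate)   # False -> two distinct objects in memory

# Swap the variable names and nothing observable changes; the question
# "which is the original?" has no answer inside the objects themselves.
apple, duplicate = duplicate, apple
print(apple == duplicate)   # still True
```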
Well said. Thank you.