Isn’t dissolving the concept of personal identity relatively straightforward?
We know that we’ve evolved to protect ourselves and further our own interests, because organisms who didn’t have that as a goal didn’t fare very well. So in this case at least, personal identity is merely a desire to make sure that “this” organism survives.
Naturally, the problem lies in defining “this organism”. One says, “this” organism is something defined by physical continuity. Another says, “this” organism is something defined by the degree of similarity to some prototype of this organism.
One says, sound is acoustic vibrations. Another says, sound is the sensation of hearing...
There’s no “real” answer to the question “what is personal identity”, any more than there is a “real” answer to the question “what is sound”. You may pick any definition you prefer. Of course, truly dissolving “personal identity” isn’t as easy as dissolving “sound”, because we are essentially hard-wired to anticipate that there is such a thing as personal identity, and to have urges for protecting it. We may realize on an intellectual level that “personal identity” is just a choice of words, but still feel that there should be something more to it, some “true” fact of the matter.
But there isn’t. There are just various information-processing systems with different degrees of similarity to each other. One may draw more-or-less arbitrary borders between the systems, designating some as “me” and some as “not-me”, but that’s a distinction in the map, not in the territory.
Of course, if you have goals about the world, then it makes sense to care about the information-processing systems that are similar to you and share those goals. So if I want to improve the world, it makes sense for me to care about “my own” (in the commonsense meaning of the word) well-being—even though future instances of “me” are actually distinct systems from the information-processing system that is typing these words, I should still care about their well-being because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out. But that doesn’t mean that I should necessarily consider them “me”, or that the word would have any particular meaning.
And naturally, on some anticipation/urge-level I still consider those entities “me”, and have strong emotions regarding their well-being and survival, emotions that go above and beyond what is justified merely in the light of my goals. But I don’t consider that something I should necessarily endorse, except to the extent that such anticipations are useful instrumentally. (E.g. status-seeking thoughts and fantasies may make “me” achieve things which I would not otherwise achieve, even though they make assumptions about such a thing as personal identity.)
So if I want to improve the world, it makes sense for me to care about “my own” … well-being—even though future instances of “me” are actually distinct systems … because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out.
I think it’s clear that there is also terminal value in caring about the well-being of “me”. As with most other human psychological drives, it acts as a sloppily optimized algorithm of some instrumental value, but while its purpose could be achieved more efficiently by other means, the particular way it happens to be implemented contributes an aspect of human values that is important in itself, in a way that’s unrelated to the evolutionary purpose that gave rise to the psychological drive, or to the instrumental value of its present implementation.
(Relevant posts: Evolutionary Psychology, Thou Art Godshatter, In Praise of Boredom.)
It’s not clear to me. To get us to behave selfishly, evolution could have instilled false aliefs to the effect that other people’s mental processes aren’t as real as ours, in which case we may want to just disregard those. Even if there’s no such issue, there’s not necessarily any simple one-to-one mapping from urges to components of reflected preference, especially when the urges seem to involve concepts like “me” that are hard to extend beyond a low-tech human context. (If I recall correctly, on previous occasions when you’ve made this argument, you were thinking of “me” in terms of similarity in person-space, which is not as hard to make sense out of as the threads of experience being discussed in this thread.)
Fair enough. I don’t personally endorse it as a terminal value, but it’s everyone’s own decision whether to endorse it or not.
I don’t believe it is; at the very least, it’s relatively easy to decide incorrectly, so the fact of having (provisionally) decided doesn’t answer the question of what the correct decision is. “It’s everyone’s own decision” or “everyone is entitled to their own beliefs” sounds like very bad epistemology.
I cited what seems to me like a strong theoretical argument for antipredicting terminal indifference to personal well-being. Your current conclusion being contrary to what this argument endorses doesn’t seem to address the argument itself.
I thought that your previous comment was simply saying that
1) in deciding whether or not we should value the survival of a “me”, the evolutionary background of this value is irrelevant
2) the reason why people value the survival of a “me” is unrelated to the instrumental benefits of the goal
I agree with those claims, but don’t see them as being contrary to my decision not to personally endorse such a value. You seem to be saying that the question of whether or not a “me” should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations; otherwise they wouldn’t be terminal values.
You seem to be saying that the question of whether or not a “me” should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations; otherwise they wouldn’t be terminal values.
Wait—what? Are you partly defining terminal values via their being unaffected by epistemic considerations? This makes me want to ask a lot of questions for which I would otherwise take answers for granted. Like: are there any terminal values? Can a person choose terminal values? Do choices express values that were antecedent to the choice? Can a person have “knowledge” or some closely related goal as a personal terminal value?
(Interestingly, seditious values deathist that I am, I am not inclined to believe in “terminal value” of ecologically-contingent approximations of actual morality (i.e., the actually justified decision policy, i.e., God); but God seems to care about those hasty approximations on their own terms, and so I end up caring, by transitivity, about me qua me and individuals qua individuals. So the humanist and the theist end up in the same non-Buddhist place. Ave meta!)
Such comments remind me of Time Cube with a dash of sanity, if only you would strip out the nonsense words (about 90 percent of the content) and clearly define everything that’s left.
I think it’s clearly not nonsense, but deserves to be downvoted anyway for casually assuming weird god stuff that he hasn’t really properly explained anywhere. Still interesting to the W_N connoisseur, and the parentheses are a mitigating factor.
Isn’t dissolving the concept of personal identity relatively straightforward?
Nay, I don’t think it is.
I don’t take issue with anything in particular you said in this comment, but it doesn’t feel like a classic, non-greedy Reduction of the style used to reduce free will into cognitive algorithms or causality into math.
The sense in which you can create another entity arbitrarily like yourself and say, “I identify with this creature based on so-and-so definition”, and then have different experiences than the golem no matter how like you it is, is the confused concept that I do not think has been dissolved; I am not sure whether a non-fake dissolving of it has ever even started. (Example: Susan Blackmore’s recent “She Won’t Be Me”. This is clearly a fake reduction; you don’t get to escape the difficulties of personal identity confusion by saying a new self pops up every few minutes/seconds/Planck times. Your comment is less obviously wrong but still sidesteps the confusion instead of Solving it.)
Hell, it’s not just a Confusing Problem. I’d say it’s a good candidate for The Most Confusing Problem.
Edit (one of many little ones): I made this comment pretty poorly, but I hope the point both makes sense and got through relatively intact. Mitchell Porter’s comment is also really good until the penultimate paragraph.
The sense in which you can create another entity arbitrarily like yourself and say, “I identify with this creature based on so-and-so definition”, and then have different experiences than the golem no matter how like you it is, is the confused concept that I do not think has been dissolved; I am not sure whether a non-fake dissolving of it has ever even started.
I tried responding to this example, but I find the whole example so foreign and confused in some sense that I don’t even know how to make enough sense of it to offer a critique or an explanation. Why wouldn’t you expect there to exist an entity with different experiences than the golem, and which remembers having identified with the golem? You’re not killing it, after all.
And naturally, on some anticipation/urge-level I still consider those entities “me”, and have strong emotions regarding their well-being and survival, emotions that go above and beyond what is justified merely in the light of my goals. But I don’t consider that something I should necessarily endorse, except to the extent that such anticipations are useful instrumentally.
I don’t consider them something I should necessarily endorse. I consider them something that I do actually endorse because all things considered I want to.
(Although given that the endorsement of such emotional considerations thereby makes them parts of my goals, I suppose we could declare, technically and tautologically, that any emotions that go above and beyond that which is incorporated into my goals should not be endorsed, depending on the definition of a few of the terms.)
Interesting. I was thinking about the same thing regarding the early stages of AGI: it is difficult to define ‘me’ precisely, and it’s unclear why one would need a really precise definition of ‘me’ in an early AGI. It’s good enough if life counts as ‘me’ to the AI while Jupiter doesn’t; the loss in utility from life not being kosher food for the AI is negligible.