Good decisions need to be based on correct beliefs as well as values.
Yes, but here the right belief is the realization that what connects you to what we traditionally called your future “self” is nothing supernatural, i.e. no super-material, unified, continuous self of extra value. We don’t have any hint of such a thing; we can explain your feelings about it all too well as fancy brain instincts, akin to seeing the objects in a 24 FPS movie as ‘moving’ (not to say ‘alive’); and we know all too well that we could, in theory, make you feel you’ve experienced your past as a continuous self even though you were nano-assembled a microsecond ago with exactly the right memories inducing this belief/‘feeling’. So, in the absence of this extra “self”: “you” are simply this instant’s mind we currently observe from you. Now, crucially, this mind obviously has a certain regard, hopes, and plans for, in essence, what happens to your natural successor. In the natural world, it turns out to be perfectly predictable from the outside who this natural successor is: your own body.
In situations like those imagined in cloning thought experiments, by contrast, it is suddenly less obvious from the outside whom you’ll consider your most dearly cared-for ‘natural’ (or now less obviously ‘natural’) successor. But since the only thing that in reality connects you with what we traditionally would have called “your future self” is your own particular preferences/hopes/cares toward that elected future mind, there is no objective rule to tell you from outside which one you have to consider the relevant future mind. The relevant one is the one you find relevant. This is very analogous to, say, being in love: the one ‘relevant’ person in a room for you to save first in a fire (if you’re egoistic about you and your loved one) is the one you (your brain instincts, your hormones, or whatever) picked; you don’t have to ask anyone outside whom that should be.
so if there is some fact of the matter that you don’t survive destructive teleportation, you shouldn’t go for it, irrespective of your values
The traditional notion of “survival”, insofar as it invokes a continuous, integrated “self” over and above the succession of individual ephemeral minds with forward-looking preferences, must indeed be put into perspective, just like that long-term “self” itself.
There’s a theory that personal identity is only ever instantaneous... an “observer moment”... such that, as an objective fact, you have no successors. I don’t know whether you believe it. If it’s true, you epistemically-should believe it, but you don’t seem to believe in epistemic norms.
There’s another, locally popular, theory that the continuity of personal identity is only about what you care about. (It either just is that, or it needs to be simplified to that... it’s not clear which.) But it’s still irrational to care about things that aren’t real... you shouldn’t care about collecting unicorns... so if there is some fact of the matter that you don’t survive destructive teleportation, you shouldn’t go for it, irrespective of your values.
Thanks. I’d be keen to read more on this if you have links. I’ve wondered to what degree the relegation of the “self” I’m proposing (or that may have been proposed in a similar way in Rob Bensinger’s post, and maybe before) is related to what we always hear about ‘no self’ from the more meditative crowd, though I’m not sure there’s a link there at all. But I’d be keen to read others who have proposed things in a similar theoretical direction.
There’s a theory that personal identity is only ever instantaneous... an “observer moment”... such that, as an objective fact, you have no successors. I don’t know whether you believe it.
On the one hand, ‘no [third-party] objective successor’ makes sense. On the other hand: I’m still so strongly programmed to absolutely want to preserve my ‘natural’ [unobjective but ingrained in my brain...] successors that the lack of an ‘outside-objective’ successor doesn’t impact me much.[1]
I think a simple analogy here, for which we can stay with the traditional view of the self, is: objectively, there’s no reason I should care so much about myself, or about my close ones; my basic moral theory would ask me to be a bit less kind to myself and kinder to others, but given my wiring I just don’t manage to behave so perfectly.
Yes, but here the right belief is the realization that what connects you to what we traditionally called your future “self”, is nothing supernatural
As before, merely rejecting the supernatural doesn’t give you a single correct theory, mainly because it doesn’t give you a single theory. There are many more than two non-soul theories of personal identity (and the one Bensinger was assuming isn’t the one you are assuming).
i.e. no super-material unified continuous self of extra value:
That’s a flurry of claims. One of the alternatives to the momentary theory of personal identity is the theory that a person is a world-line, a 4D structure—and that’s a materialistic theory.
we don’t have any hint at such stuff;
Perhaps we have no evidence of something with all those properties, but we don’t need something with all those properties to supply an alternative. Bensinger’s computationalism is also non-magical (etc.).
So due to the absence of this extra “self”: “You” are simply this instant’s mind we currently observe from you.
Again, the theory of momentary identity isn’t right just because soul theory is wrong.
But as the only thing that in reality connects you with what we traditionally would have called “your future self”, is your own particular preferences/hopes/
No, since I have never been destructively teleported, I am also connected by material continuity. You can hardly call that supernatural!
In the natural world, it turns out to be perfectly predictable from the outside, who this natural successor is: your own body.
Great. So it isn’t all about my values. It’s possible for me to align my subjective sense of identity with objective data.
The core claim of my post is that the ‘instantaneous’ mind (with its preferences etc., see post) is, if we look closely and keep a healthy dose of skepticism about our intuitions about our own mind/self, sufficient to make sense of what we actually observe. And given that this instantaneous mind with its memories and preferences is the thing we can most directly observe, with no great surprise in it, I struggle to find any competing theory as simple or ‘simpler’ and therefore more compelling (Occam’s razor), as I meant to explain in the post.
As I make very clear in the post, nothing in this suggests other theories are impossible. For anything, there can of course be (infinitely) many alternative theories available to explain it. I maintain that the one I propose has the particular virtue of simplicity.
Regarding computationalism: I’m not sure whether you meant a very specific ‘flavor’ of computationalism in your comment; but for sure I did not mean to exclude computationalist explanations in general; in fact I’ve defended some strong computationalist position in the past and see what I propose here to be readily applicable to it.
the ‘instantaneous’ mind (with its preferences etc., see post) is—if we look closely and don’t forget to keep a healthy dose of skepticism about our intuitions about our own mind/self—sufficient to make sense of what we actually observe
Huh? If you mean my future observations, then you are assuming a future self, and therefore temporally extended self. If you mean my present observations, then they include memories of past observations.
in fact I’ve defended some strong computationalist position in the past
But a computation is a series of steps over time, so it is temporally extended.