A sad fact is that good methods to elicit accurate probabilities of the outcome of some future process, e.g. who will win the next election, give you an incentive to influence that outcome, e.g. by campaigning and voting for the candidate you said was more likely to win. But with mind uploading and the ‘right’ theory of personal identity, we can fix this!
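One way to see the problem concretely, assuming the ‘good method’ is a log scoring rule (just one standard example; the numbers below are made up):

```python
# Expected log score from reporting probability p for "candidate A wins"
# when the true chance of A winning is q. Higher (less negative) is better.
import math

def expected_log_score(p: float, q: float) -> float:
    return q * math.log(p) + (1 - q) * math.log(1 - p)

p = 0.7  # the probability I announced for candidate A
print(expected_log_score(p, q=0.7))  # ≈ -0.611: I leave the election alone
print(expected_log_score(p, q=0.8))  # ≈ -0.526: I campaign for A and shift the odds
```

Since I announced p > 0.5 for A, anything that shifts the real odds toward A raises my expected score, which is exactly the incentive to campaign for the candidate I said was more likely to win.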
First, suppose that you think of all psychological descendants of your current self as ‘you’, but you don’t think of descendants of your past self as ‘you’. So, if you were going to make a copy of yourself tomorrow, then today you would think of that copy as fully you; but once the copy exists, your ‘main self’ and the ‘copy’ would think of themselves as totally different people, and wouldn’t particularly care about the other one winning money.
Once you’re happy with that, here’s what you do: first, make your prediction about who wins the next election. Then, save your brain state right after making the prediction. Then wait for the election to happen, and after the result is known, instantiate a copy of you from that saved brain state, and reward or punish that copy according to how good the prediction was. At the time of making the prediction, you’re incentivized to be right so that your future self gets rewarded; but in the window between making the prediction and the election, you think of the person who gets rewarded/punished as not you, and therefore don’t want to influence the election (any more than you already did).
(NB: this assumes that acausal trade isn’t a thing.)
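For concreteness, here’s a toy Python sketch of that reward step, using a log scoring rule as one possible way to ‘reward or punish’ the revived copy (the scoring rule and all the names here are my own illustration, not something the proposal specifies):

```python
# Toy model of the freeze-then-score protocol described above.
import math
import random

def make_prediction(p_a_wins: float) -> dict:
    # The prediction-time self reports a probability; the "saved brain state"
    # is modelled here as nothing more than a snapshot of that report.
    return {"reported_p": p_a_wins}

def run_election(true_p_a_wins: float) -> bool:
    # The election happens; True means candidate A wins.
    return random.random() < true_p_a_wins

def reward_revived_copy(snapshot: dict, a_won: bool) -> float:
    # After the result is known, a copy is instantiated from the snapshot and
    # paid its log score; the self who lived through the campaign period is
    # not the one being paid.
    p = snapshot["reported_p"]
    return math.log(p if a_won else 1.0 - p)

snapshot = make_prediction(p_a_wins=0.7)
a_won = run_election(true_p_a_wins=0.7)
print(reward_revived_copy(snapshot, a_won))
```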
Objections might include:
That’s mindcrime and/or murder, which is bad.
Acausal trade is in fact a thing.
Blah blah technical feasibility.
Why murder? No sims are being deleted in this proposal.
Ok, a much simpler way is to put yourself in storage right after making the prediction and revive yourself after the event happens (e.g. by not having a copy of you hanging out between the prediction and the event). Then you don’t need the weird theory of identity.
“First, suppose that you think of all psychological descendants of your current self as ‘you’, but you don’t think of descendants of your past self as ‘you’.”

I’m having trouble supposing this. Aren’t ALL descendants of my past selves “me”, including the me who is writing this comment? I’m good with differing degrees of “me-ness”, based on some edit-distance measure that hasn’t been formalized, but that’s not based on path; it’s based on similarity. My intuition is that it’s symmetrical.
I’m sympathetic to the idea that this is a silly assumption; I just think it buys you a neat result.