I’m still working out various aspects, details, and suchlike, but so you can at least see what direction my thoughts are going (before I’ve hammered these into good enough shape to include in the revival-request doc), here’s a few paragraphs I’ve been working on:
Sometimes, people will, with the best of intentions, perform acts that turn out to be morally reprehensible. As one historical example in my home country, with the stated justification of improving their lives, a number of First Nations children were sent to residential schools where the efforts to eliminate their culture ranged from corporal punishment for speaking the wrong language to instilling lessons that led the children to believe that Indians were worthless. While there is little I, as an individual, can do to make up for those actions, I can at least try to learn from them, and to reduce the odds of more tragedies being committed under the claim of “it was for their own good”. To that end, I am going to attempt a strategy called “precommitment”. Specifically, I am going to do two things: I am going to precommit to work against the interests of anyone who alters my mind without my consent, even if, after the alteration, I agree with it; and I am going to give my consent in advance to certain sharply-limited alterations, in much the way that a doctor can be given permission to do things to a body that would be criminal without that permission.
I value future states of the universe in which I am pursuing things I value more than I value futures in which I pursue other things. I do not want my mind to be altered in ways that would change what I value, and the least hypocritical way to pursue that is to discourage all forms of non-consensual mind-alteration. I am willing to agree that I, myself, should be subject to such forms of discouragement, were I to attempt such an act. I have been able to think of only one moral justification for such acts: clear evidence that performing them will reduce the odds of all sapience going permanently extinct. But given how easily people are able to fool themselves, even then the standard should hold: if non-consensually altering someone’s mind really is what is required to prevent extinction, then accepting responsibility for doing so, including whatever punishments result, would be a small price to pay. And so I am willing to accept such punishments even in this extreme case, in order to discourage frivolous use of this justification.
While a rigid stance against non-consensual mind-alteration may be morally required to allow for a peaceful society, there are also certain benefits to allowing consensual mind-alteration in certain cases. Most relevantly, it could be argued that scanning a brain and creating a software emulation of it counts as altering it, and it is obviously in my own self-interest to allow that as an option to help me be revived to resume pursuing my values. Thus, I am going to give my consent in advance to “alter” my mind to allow me to continue to exist, with the minimal amount of alteration possible, in two specific circumstances: 1) if such alterations are absolutely required to allow my mind to continue to exist at all, and 2) as part of my volunteering to be a subject for experimental mind-uploading procedures.
And how are you going to do this? Precommitment is not a promise, it’s making it so that you are unable to choose in the future.
Well, if you don’t mind my tweaking your simple and absolute “unable” into something more like “unable, at least without suffering significant negative effects, such as a loss of wealth”, then I am aware of this, yes. Precommitment for something on this scale is a big step, and I’m taking a bit of time to think the idea over, so that I can become reasonably confident that I want to precommit in the first place. If I do decide to do so, then one of the simpler options could be to, say, pre-authorize whatever third-party agents have been nominated to act in my interests and/or on my behalf to use some portion of edited-me’s resources to fund the development of a version of me without the editing.
If you’re unable to protect yourself from being edited, what makes you think your authorizations will have any force or that you will have any resources? And if you actually can “fund the development of a version of me without the editing”, don’t you just want to do it unconditionally?
I think we’re bumping up against some conflicting assumptions. At least at this stage of the drafting process, I’m focusing on scenarios where at least some of the population of the future has at least some reason to pay at least minimal attention to whatever requests I make in the letter. If things are so bad that someone is going to take my frozen brain and use it to create an edited version of my mind without my consent, and there isn’t a neutral third party around with a duty to try to act in my best interests… then, in such a future, I’m reasonably confident that it doesn’t matter what I put in this request-doc. So I might as well focus my writing on other futures, such as ones in which a neutral third-party advocate might be persuadable to set up a legal instrument funneling some portion of my edited self’s basic-guaranteed-income towards keeping a copy of the original brain-scan safely archived until a non-edited version of myself can be created from it.