One of the few methods for detecting a large proportion of any program’s bugs is to allow many people, with all their varied perspectives and skills, to examine it, by declaring the program free and open source and releasing both the source code and binaries for inspection.
That’s a claim often made (“given enough eyeballs, all bugs are shallow”) but it’s not so clear-cut in practice. In real life a lot of open-source projects are very buggy and remain very buggy (and open to ’sploits) for a very long time. At the same time there is closed-source software which is considerably more bug-free (but very expensive), e.g. the code in fly-by-wire airplanes.
Besides, physical control, generally speaking, trumps all. If your mind is running on top of, say, open-source Ubuntu 179.5 Zooming Zazzle but I have access to your computing substrate, that is, the physical machine which runs the code, the fact that the machine runs an open-source OS is quite irrelevant. You’re looking for impossible guarantees.
And remember that you are not making choices, but requests. You can’t “trust the motives” or not—if someone revives you with malicious intent, he can ignore your requests easily enough.
a lot of open-source projects are very buggy and remain very buggy
Yep.
there is closed-source software which is considerably more bug-free
Yep.
You’re looking for impossible guarantees.
I’m not looking for guarantees at all. (Put another way, I’m well aware that 0 and 1 are not probabilities.) What I am doing is trying to gauge the odds; and given my own real-world experience, open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software, to the extent that I’m willing to make an important choice based on whether or not a piece of software is open-source.
And remember that you are not making choices, but requests.
True, as far as it goes. However, this document I’m writing is also something of a letter to anyone who is considering reviving me, and given how history goes, they are very likely going to have to take into account factors that I currently can’t even conceive of. Thus, I’m writing this doc in a fashion that not only lists my specific requests regarding particular items, but also describes the reasoning behind the requests, so that the prospective reviver has a better chance of being able to extrapolate what my preferences about the unknown factors would likely be.
if someone revives you with malicious intent
If someone revives me with malicious intent, then all bets are off, and this document will nigh-certainly do me no good at all. So I’m focusing my attention on scenarios involving at least some measure of non-malicious intent.
open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software
On the basis of this “tends” you make a rather drastic request to NOT revive you if you’ll be running on top of some closed-source layer.
Not to mention that you’re assuming that “open-source” and “closed-source” concepts will still make sense in that high-tech future. As an example, let’s say I give you a trained neural net. It’s entirely open source, you can examine all the nodes, all the weights, all the code, everything. But I won’t tell you how I trained that NN. Are you going to trust it?
On the basis of this “tends” you make a rather drastic request to NOT revive you if you’ll be running on top of some closed-source layer.
That’s true. But given the various reasonably-possible scenarios I can think of, making this extreme of a request seems to be the only way to express the strength of my concern. I’ll admit it’s not a common worry; of course, this isn’t a common sort of document.
(If you want to know more about what leads me to this conclusion, you could do worse than to Google one of Cory Doctorow’s talks or essays on ‘the war on general-purpose computation’.)
As an example
You provide insufficient data about your scenario for me to make a decent reply. That is why I included the general reasoning process leading to my requests about open- and closed-source, and why, in the latest version of the doc, I mention that part of the reason for going into that detail is to give revivers some data from which to extrapolate what my choices would be in unknown scenarios. (In this particular case, the whole point of differentiating between open- and closed-source software is the factor of /trust/; and in your scenario, you don’t give any information on how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted.)
I am well aware of the war on general computation, but I fail to see how it’s relevant here. If you are saying you don’t want to be alive in a world where this war has been lost, that’s… a rather strong statement.
To make an analogy, we’re slowly losing the ability to fix, modify, and, ultimately, control our own cars. I think that is highly unfortunate, but I’m unlikely to declare a full boycott of cars and go back to horses and buggies.
Since you’re basically talking about security, you might find it useful to start by specifying a threat model.
how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted
What do you mean by “such NNs”? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly—it’s too general for a meaningful answer.
In any case, the point is that the preference for open-source relies on it being useful, that is, the ability to gain helpful information from examining the code, and the ability to modify it to change its behaviour. You can examine a sufficiently complex trained NN all you want, but the information you’ll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.
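To make the black-box point concrete, here’s a minimal sketch (in Python, with an invented toy architecture; none of the layer sizes or numbers come from anything you wrote) of how little you actually learn by enumerating every parameter of a trained net:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the parameters of a "trained" three-layer net someone handed us
# (the architecture and values are invented purely for illustration).
weights = [
    rng.normal(size=(64, 128)),
    rng.normal(size=(128, 128)),
    rng.normal(size=(128, 10)),
]

total = sum(w.size for w in weights)
print(f"{total} parameters, every one of them inspectable:")
for i, w in enumerate(weights):
    print(f"  layer {i}: shape {w.shape}, mean {w.mean():+.3f}, std {w.std():.3f}")

# ...and yet nothing printed above tells you what the net was trained on,
# what function it computes, or which of these numbers you would edit
# to remove an unwanted behaviour.
```

Full “source” access, in other words, without any of the leverage that makes source access worth caring about.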
Since you’re basically talking about security, you might find it useful to start by specifying a threat model.
I thought I had; it’s the part around the word ‘horrifying’.
What do you mean by “such NNs”? Neural nets are basically general-purpose models and your question is similar to asking how trustworthy computers have been at performing their intended functions properly—it’s too general for a meaningful answer.
We actually already have a lot of the fundamental software required to run an “emulate brain X” program—stuff that accesses hardware, shuffles swap space around, arranges memory addresses, connects to networking, models a virtual landscape and avatars within, and so on. Some scientists have done extremely primitive emulations of neurons or neural clusters, so we’ve got at least an idea of what software would likely need to be scaled up to run a full-blown human mind. None of this software has any particular need for neural nets. I don’t know how such NNs as you propose would be necessary to emulate a brain; I don’t know what service they would add, how fundamental they would be, what sort of training data would be used, and so on.
Put another way, as best as I can interpret your question, it’s like saying “And what if future cars required an algae system?”, without even saying whether the algae tubing is connected to the fuel, or the exhaust, or the radiator, or the air conditioner. You’re right that NNs are general-purpose; that is, in fact, the issue I was trying to raise.
You can examine a sufficiently complex trained NN all you want, but the information you’ll gain from this examination is very limited and your ability to modify it is practically non-existent. It is effectively a black box even if you can peer at all the individual components and their interconnects.
Alright. In this model, where it appears that the training data is unavailable, that the existing NN can’t be retrained or otherwise modified, and that there is no mention of being able to train up a replacement NN with different behaviours, the NN matches the relevant aspects of “closed-source” software much more closely than “open-source”: if a hostile exploiter finds a way to, say, leverage increased access to and control of the computer through the NN, there is little to no chance of detecting or correcting the aspects of the NN’s behaviour which allow that. I’ll spend some time today seeing if I can rework the relevant paragraphs so that this conclusion can be more easily derived.
That’s not a threat model. A threat model is basically a list of adversaries and their capabilities. Typically, defensive measures help against some of them, but not all of them—a threat model helps you figure out the right trade-offs and estimate who you are (more or less) protected from, and who you are vulnerable to.
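For illustration only, here’s a minimal sketch of the kind of thing I mean; the adversaries, capabilities, and defences listed are invented placeholders, not an analysis of your actual situation:

```python
# A threat model at its most basic: a list of adversaries with their capabilities,
# plus a note of which defences address which adversaries.
# Every entry below is an invented placeholder.
threat_model = {
    "script kiddie": ["scans the internet", "tries default passwords"],
    "hardware owner": ["physical access to the computing substrate"],
    "careless admin": ["misconfigures services", "skips security updates"],
}

defences = {
    "change default passwords": ["script kiddie"],
    "open-source OS": [],  # lets you audit the code; does nothing about physical access
}

# Which adversaries remain uncovered after the chosen defences?
covered = {adversary for targets in defences.values() for adversary in targets}
for adversary, capabilities in threat_model.items():
    status = "covered" if adversary in covered else "NOT covered"
    print(f"{adversary:15} ({', '.join(capabilities)}): {status}")
```

The point isn’t the code, it’s the exercise: decide whom you are defending against before deciding which defences are worth drastic requests.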
stuff that accesses hardware, shuffles swap space around, arranges memory addresses
That stuff usually goes by the name of “operating system”. Why do you think that brain emulations will run on top of something that’s closely related to contemporary operating systems?
a hostile exploiter
You seem to worry a lot about your brain emulation being hacked from the outside, but you don’t worry as much about what the rightful owner of the hardware and the software on top of which your em lives might do?
I’m merely a highly-interested amateur. Would you be willing to help me work out the details of such a model?
Why do you think that brain emulations will run on top of something that’s closely related to contemporary operating systems?
Because even as a scifi fan, I can only make so many guesses about alternatives, and it seems at least vaguely plausible that the same info-evolutionary pressures that led to the development of contemporary operating systems will continue to exist for at least the next couple of decades. Plausible enough, at any rate, that I should cover it as a possibility in the request-doc.
the rightful owner of the hardware
Without getting into the whole notion of property rights versus the right to revolution, if I thought whoever was planning to run a copy of me on a piece of hardware was fully trustworthy, why would I have included the ‘neutral third-party’ clause?
You are writing, basically, a living will for a highly improbable situation. Conditional on that situation happening, I think that since you have no idea into which conditions you will wake up, it’s best to leave the decision to the future-you. Accordingly, the only thing I would ask for is the ability for future-you to decide his own fate (notably, including his right to suicide if he makes this choice).
In the latest draft, I’ve rewritten at least half from scratch, focusing on the reasons why I want to be revived in the first place, and thus under which circumstances reviving me would help those reasons.
future-you
The whole point of being worried about hostile entities taking advantage of vulnerabilities hidden in closed-source software is that future-me might be even less able to be trusted to work towards my values than a dieter’s future self can be trusted not to grab an Oreo if any are left in their home. Note to self: include the word ‘precommitment’ in version 0.2.1.
is that future-me might be even less able to be trusted to work towards my values
If whoever revives you deliberately modifies you, you’re powerless to stop it. And if you’re worried that future-you will be different from past-you, well, that’s how life works. A future-you in five years will be different from current-you who is different from the past-you of five years ago.
As to precommitment, I don’t think you have any power to precommit, and I don’t think it’s a good idea either. Imagine if a seven-year-old past-you somehow found a way to precommit the current-you to eating a pound of candy a day, every day...
If whoever revives you deliberately modifies you, you’re powerless to stop it.
True, which is why I’m assuming a certain minimal amount of good-will on the part of whoever revives me. However, just because the reviver has control over the technology allowing my revival doesn’t mean they’re actually technically competent in matters of computer security—I’ve seen too many stories in /r/talesfromtechsupport of computer-company executives being utterly stupid in fundamental ways for that. The main threat I’m trying to hold off is, roughly, “good-natured reviver leaves the default password in my uploaded self’s router unchanged, script-kiddie running automated attacks on the whole internet gains access, script turns me into a sapient bitcoin-miner-equivalent for that hacker’s benefit”. That’s just one example of a large class of threats. No hostile intent by the reviver is required, just a manager-level understanding of computer security.
A future-you in five years will be different from current-you who is different from the past-you of five years ago.
Yes, I know. This is one reason that I am trying not to specify /what/ it is I value in the request-doc, other than 1) instrumental goals that are good for achieving many terminal goals, and 2) my own life, which I value both as an instrumental and a terminal goal, and which I confidently expect to remain one of my fundamental values for quite some time to come.
I don’t think you have any power to precommit
I’ll admit that I’m still thinking on this one. Socially, precommitting is mainly useful as a deterrent, and I’m working out whether trying to precommit to work against anyone who modifies my mind without my consent, or any other variation of the tactic, would be worthwhile even if I /can/ follow through.
leaves the default password in my uploaded self’s router unchanged
Imagine a Faerie Queen popping into existence near you and saying: Yo, I have a favour to ask. See, a few centuries ago a guy wished to live in the far future, so I thought why not? it’s gonna be fun! and I put him into stasis. It’s time for him to wake up, but I’m busy so can you please reanimate him? Here is the scroll which will do it, it comes with instructions. Oh, and the guy wrote a lengthy letter before I froze him—he seemed to have been very concerned about his soul being tricked by the Devil—here it is. Cheers, love, I owe you one! …and she pops out of existence again.
You look at the letter (which the Faerie Queen helpfully translated into more or less modern English) and it’s full of details about consecrated ground, and wards against evil eyes, and witch barriers, and holy water, and what kind of magic is allowed anywhere near his body, and whatnot.
How seriously are you going to take this letter?
Language is a many-splendored thing. Even a simple shopping list contains more information than a mere list of goods; a full letter is far more valuable still. As one fictional character once put it, it’s worth looking for the “underneath the underneath”; as another put it, it’s possible to deduce much of modern civilization from a cigarette butt. If you need a specific reason to pay attention to such a letter spelled out for you, then it could be examined for clues as to how likely it is that the reanimated fellow would need to spend time in an asylum before being deemed competent to handle his own affairs and released into modern society, or whether it’s safe to plan on just letting him crash on my couch for a few days.
And that’s without even touching the minor detail that, if a Faerie Queen is running around, then the Devil may not be far behind her, and the resurrectee’s concerns may, in fact, be completely justified. :)
PS: I like this scenario on multiple levels. Is there any chance I could convince you to submit it to /r/WritingPrompts, or otherwise do more with it on a fictional level? ;)
It looks like you’ve changed the subject a bit—from whether the letter should be taken seriously in the sense of doing what it requests, to whether it should be taken seriously in the sense of reading it carefully.
Why can’t we have both?
Oh, I’m sure the letter is interesting, but the question is whether you will actually set up wards and have a supply of holy water on hand before activating the scroll. Though the observation that the existence of the Faerie Queen changes things is a fair point :-)
I don’t know if the scenario is all that exciting, it’s a pretty standard trope, a bit tarted-up. If you want to grab it and run with it, be my guest.
I’m still working out various aspects, details, and suchlike, but so you can at least see what direction my thoughts are going (before I’ve hammered these into good enough shape to include in the revival-request doc), here’s a few paragraphs I’ve been working on:
Sometimes, people will, with the best of intentions, perform acts that turn out to be morally reprehensible. As one historical example in my home country, with the stated justification of improving their lives, a number of First Nations children were sent to residential schools where the efforts to eliminate their culture ranged from corporal punishment for speaking the wrong language to instilling lessons that led the children to believe that Indians were worthless. While there is little I, as an individual, can do to make up for those actions, I can at least try to learn from them, to try to reduce the odds of more tragedies being committed under the claim of “it was for their own good”. To that end, I am going to attempt a strategy called “precommitment”. Specifically, I am going to do two things: I am going to precommit to work against the interests of anyone who alters my mind without my consent, even if, after the alteration, I agree with it; and I am going to give my consent in advance to certain sharply-limited alterations, in much the way that a doctor can be given permission to do things to a body that would be criminal without that permission.
I value future states of the universe in which I am pursuing things I value more than I value futures in which I pursue other things. I do not want my mind to be altered in ways that would change what I value, and the least hypocritical way to ensure that is to discourage all forms of non-consensual mind-alteration. I am willing to agree that I, myself, should be subject to such forms of discouragement if I were to attempt such an act. I have been able to think of only one moral justification for such acts: clear evidence that the act will reduce the odds of all sapience going permanently extinct. But given how easily people are able to fool themselves, if non-consensually altering someone’s mind really is what is required to prevent that, then accepting responsibility for doing it, including whatever punishments result, would be a small price to pay; so I am willing to accept such punishments even in this extreme case, in order to discourage the frivolous use of this justification.
While a rigid stance against non-consensual mind-alteration may be morally required in order to allow a peaceful society, there are also certain benefits to allowing consensual mind-alteration in certain cases. Most relevantly, it could be argued that scanning a brain and creating a software emulation of it counts as altering it, and it is obviously in my own self-interest to allow that as an option to help me be revived to resume pursuing my values. Thus, I am going to give my consent in advance to “altering” my mind to allow me to continue to exist, with the minimal amount of alteration possible, in two specific circumstances: 1) if such alterations are absolutely required to allow my mind to continue to exist at all, and 2) as part of my volunteering to be a subject for experimental mind-uploading procedures.
And how are you going to do this? Precommitment is not a promise, it’s making it so that you are unable to choose in the future.
Well, if you don’t mind my tweaking your simple and absolute “unable” into something more like “unable, at least without suffering significant negative effects, such as a loss of wealth”, then I am aware of this, yes. Precommitment for something on this scale is a big step, and I’m taking a bit of time to think the idea over, so that I can become reasonably confident that I want to precommit in the first place. If I do decide to do so, then one of the simpler options could be to, say, pre-authorize whatever third-party agents have been nominated to act in my interests and/or on my behalf to use some portion of edited-me’s resources to fund the development of a version of me without the editing.
If you’re unable to protect yourself from being edited, what makes you think your authorizations will have any force or that you will have any resources? And if you actually can “fund the development of a version of me without the editing”, don’t you just want to do it unconditionally?
I think we’re bumping up against some conflicting assumptions. At least at this stage of the drafting process, I’m focusing on scenarios where at least some of the population of the future has at least some reason to pay at least minimal attention to whatever requests I make in the letter. If things are so bad that someone is going to take my frozen brain and use it to create an edited version of my mind without my consent, and there isn’t a neutral third party around with a duty to try to act in my best interests… then, in such a future, I’m reasonably confident that it doesn’t matter what I put in this request-doc, so I might as well focus my writing on other futures, such as ones in which a neutral third-party advocate might be persuadable to set up a legal instrument funneling some portion of my edited-self’s basic guaranteed income towards keeping a copy of the original brain-scan safely archived until a non-edited version of myself can be created from it.