I can afford cryonics, but I think I wouldn’t want to vitrify children for the same reasons you are criticizing parents for having children. If it is ethical to bring children into the world only if you can care for them, protect them and provide for them, how could it be ethical to send a helpless, dependent child to an indeterminate future? We can make a decision to have a child in the present with lots of relevant information about the present. Sending a child to the future might be negligent.
I would like to imagine a post-cryonic life for my child that is positive.
However, what if it isn’t positive? What if my child thinks I abandoned her, as she is exploited or abused or neglected? Better to know that she experienced a few happy years, and accept that that is all there is, than risk a horrible future she can’t get away from.
If there were one person I trusted that she would be in the custody of, it would make a difference. If she were old enough to reason on her own, and know the difference between right and wrong, it would make a difference. She’s just so helpless. I shouldn’t send her there without someone who loves her, but I can’t guarantee that someone who loves her would be there.
Yes, of course. My husband would sign up too, and the grandparents, and aunts and uncles and grown siblings and their descendants. However, in this future beyond my control, they may not have any meaningful custody or be woken up at all.
I might offer that what I am imagining most vividly is a splintered, trans-humanist society that might value small human children but not the things that human children need to be happy.
So what you’re concerned about is that if your entire family signed up, they might wake up your child but not any of her relatives, or wake all of you up and then not let you actually take care of her?
I should add that I don’t think my husband and I think cryonics is “creepy”. We would sign up, whatever that means.* And if my kids want to sign up when they’re old enough to make that decision, then I would let them sign up. It’s just not something I feel comfortable doing to a small child: sending them someplace I haven’t been and can’t imagine.
* I think the “would” means that so far it sounds OK, but we realize we haven’t worked through all the angles and anticipate some oscillations in our POV.
It’s just not something I feel comfortable doing to a small child: sending them someplace I haven’t been and can’t imagine.
If your children were about to leave for a strange country without you—or for that matter with you, to some place that none of you had ever been—would you, in your pity, shoot them?
WHAT IS WRONG WITH YOU PEOPLE? WHY IS YOUR BRAIN NOT PROCESSING THIS? IT’S YOUR KIDS’ FUCKING LIVES NOT A FAIRY TALE YOU’RE WRITING. You don’t get to be uncomfortable with the fairy tale and so refuse to write it. All you can do is kill your kids. That’s it. That’s all refusal means.
The visceral reaction to “kill your kids” comes from imagining that you’re actually killing them, not letting them go about a normal life. You can argue that it comes down to the same thing, but if they were really the same thing, you could use the less emotionally-loaded language.
What you’re saying: What kind of terrible parent lets their kids live a life slightly better than they had?
Mere framing, depending simply on what your brain thinks is normal. Visit a convention of cryonicists and talk to the kids signed up for cryonics. Those parents wouldn’t think very highly of themselves if they didn’t pay to sign up their kids. If their children died and were lost, they would hold themselves at fault. They’re right.
(The obvious metaphor—so obvious, in fact, that it is not even a metaphor—is withholding lifesaving medical care. Consider how we feel about parents who refuse to treat their kid’s cancer, for example.)
Exactly. Or “What kind of parent settles for letting their kids have merely a slightly better life than they had when a dramatically better life might be possible?”
The world is largely a pretty normal place. I’ve lived in Africa and Europe and have spent time in Central America and almost every type of place in the United States. I feel like I could begin to assess the risk to some extent.
What do I know about a future with alien minds? I thought it was you who argued that we can’t possibly know their motives and values.
(Take the horribleness of me wanting to kill my kids and project that onto the future society that might revive them. If it’s in me, why can’t it be in them?)
Your children are standing in front of the boat. You can send them on the boat. You can go with them on the boat. Or you can cut their throats. That’s it. There’s nothing else.
I hand you the knife.
What do you do?
I think I’m starting to understand what the absence of clicking is. People who click process problems as if they’re in the real world. If they wouldn’t cut their child’s throat, then they sign their kid up for cryonics.
People who don’t click don’t process the problem like it’s the real world. Strange reactions rise up in them, fears of the unknown, fears of the known, and they react to these fears by running away within the landscape of their minds, and somewhere on the outside words come out of their lips like “But who knows what will happen? How can I send my kids into that?” It’s an expression of that inner fear, an expression of that running away, words coming out of the lips that match up to what’s going on inside their heads somehow… the dread of losing control, the feeling of not understanding, the horror of thinking about mortality, all of these are expressed in a flinch away from the uncomfortable thought and put stumblingly into words.
So they kill their children, because they aren’t processing a real world, they’re processing words connected to words, ways of flinching and running away and giving vent to those odd internal feelings.
And the clickers are standing in front of that boat.
Yes, I’m not a “clicker”. I realize this wasn’t addressed to me, but about me, but I don’t see how this should make me feel ashamed or even inadequate. I need to make ethical/moral decisions and I have no choice but to think through them on my own and make my own decision. When I was 16, I was certain that Proof by Induction would not work, and ever since I understood that it did work, I’ve never claimed certainty based on intuition. However, some arrogance remains in that if something doesn’t convince me, I think: why should I be convinced, if I’m not convinced? I haven’t had any feedback from life that my ability to make decisions isn’t working. I have some problems, but they don’t seem related in any way to not clicking. (Well, maybe I need to “click” on you guys just being too culturally different from me.)
I wonder if in response to your hypothetical you expect a reasonable me to suddenly realize, “oh no! I would never kill them!” and thus find the contradiction in my far-mode reasoning about cryonics. But I would. (Filling in drastic and dire reasons for why the children were being taken on a boat against my will.) So would you, I think, slip a deadly but painless pill to a young boy about to be tortured and killed in a religious ceremony if you were certain it was going to happen. Perhaps you were trying to identify an ethical failing: that at one probability of risk I “let them” live, but at a higher level I arbitrarily, cruelly kill them. I don’t think even this is correct; I don’t know where to begin to know how to reason where the ‘killing’ probability would be, and don’t claim that I do. I only know that it would be an agonizing thing for a parent to ever have to decide, but one they can’t escape from just by glibly pretending such scenarios cannot happen, if the scenario does happen.
I submit that I’m an open-minded and curious person that isn’t afraid of new ideas. (I might be afraid of a lion, but I’m not afraid of thinking about lions.) One problem that I seem to have – though I actually like it – is that I tend to forget what my reasoning on any topic is after a while, and I’m more or less a blank slate again. If I have a negative view of cryonics, when I never even heard of it outside of LW, I think it is because I found some inconsistency in your own world view about it.
For example, it hadn’t really occurred to me at first that ‘somebody strange’ might revive my daughter. My concerns were “near-concerns” – how in the world would I ever get an ambulance in time, much less get her frozen in time, in this backwater place I live in where they aren’t even competent enough to insert a child catheter correctly? But then I read several times this suspiciously repetitive chant that ‘they’re not worried’ about negative-value futures because being revived would select for positive futures.
Well, that’s clearly not dependable optimism. We might get revived just because they want to cut down on energy costs in Arizona, and keeping 20 million people frozen takes a lot of power. Maybe they have a penchant for realistic theater and want to simulate the Holocaust with real non-genetically modified humans.
In my mind, previous to hearing the chant, was that all of these scenarios were unlikely because the world is normal. Obama and byrnema and Joe 6-pack and maybe Eliezer have children, and then their children have children, and then the children of these revive us and we live in a world that is essentially the same or somewhat better. But when I process people talking about the set of possible futures like it’s actually really large enough to include all kinds of horrors with non-negligible probability, then unwarranted optimism in the direction of the probability of something I or they know nothing about does not comfort me.
That is the outcome of the group applying epistemic hygiene only to arguments that lead to conclusions they disagree with. The bad arguments for the views they agree with, left untouched, will sway a person like me who does not think in a linear way, but organically assimilates assumptions and hypotheses as I encounter them.
Your description of not-clicking sounds functionally similar to what Amanda Baggs calls ‘widgets’, though she uses the term in a more political than personal context.
It looks to me like you have the choice between running a small risk of your daughter thinking you abandoned her (to a scary future that won’t leave you in a satisfactory family unit)… or running a slightly larger risk of actually abandoning her (to the gaping maw of death). The ideal is that she gets to be 18 without dying and then decides she wants to sign up, of course (and you and other relatives are still alive and ready to join her with stacks of paperwork at the ready), but we’re talking about managing risks, here, not the best case.
I hope you don’t mind the clarification, but I think you’ve underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I’m not too concerned about the satisfactory family unit, as long as my daughter is psychologically healthy.)
This compared to death, which is terrible for reasons other than “death”. Terrible because I will miss her and because of all the relationships disconnected and because her potential living this life won’t be fulfilled—nothing that cryonics will give back.
It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you/someone write something to help me relate to that?
I hope you don’t mind the clarification, but I think you’ve underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I’m not too concerned about a satisfactory family unit, as long as my daughter is psychologically healthy.)
I realize this is probably weird coming from me, considering my own cryonics hangup, but we’re already assuming they won’t revive anyone they can’t render passably physically healthy—I think they’d make some effort to take the same precautions regarding psychological health. My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory; generic needs for care and affection in a small child are so obvious I would be astounded if the future didn’t have an arrangement in place before they revived any frozen children.
It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you write something to help me relate to that?
I’ll try, but I’m not sure exactly what you mean by “the stream of consciousness” or “independent of relationships”. I value me (my software), I value you (your software), I prefer that these softwares be executed in pleasant environments rather than sitting around statically—but then, I’d probably cease to value my software in an awful hurry if it had no relationships with other software, and I’d respect a preference on your part to end your own software execution if that seemed to be your real and reasoned desire.
Why do I have these values? Well, people are just so darned special, that’s all I can say.
My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory
No it’s not. It’s just scary.
generic needs for care and affection in a small child are so obvious
You really, really think that this, on the one hand, is “obvious”, but on the other hand, a superintelligence is going to look inside your head and go, “Huh, I just can’t figure that out.”
YOU ARE A SMALL CHILD. We all are. I know that, why can’t everyone see it?
I’m going to outright ignore you on this one. I have been met with incredulity, not mere curiosity (“Can you tell us more about the experiences you’ve had that let you model this extreme need?”), let alone commiseration (“wow, me too! let’s make friends and sign up together and solve each other’s problems!”) when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. A FAI probably could. You aren’t one. And since I know more about the phenomenon than you, I’m going to trust my predictions about what the FAI would say on inspecting my brain over yours. I think it’d say “wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve.”
YOU ARE A SMALL CHILD. We all are. I know that, why can’t everyone see it?
You’re raving. Perhaps you are deficient in a vitamin or mineral.
I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn’t even have to do anything creepy! Human beings are simply not that complicated!
Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think impossible which turn out to actually be impossible.
I am very confident that an FAI could, if necessary, create a person to order, who would be perfectly tuned to becoming someone’s friend in a few hours. How often does this kind of thing happen by accident in kindergarten?
Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.
That’s a worst-case scenario. Even if necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don’t you value your life? Why are you so willing to assume that a superintelligence can’t think of any better solutions than you can?
In principle, I’m willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)
Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.
I think that people should be created by other persons who are motivated, at least in part, by an expectation to intrinsically value the person so created. If a FAI created a person for the express purpose of being my friend, it would presumably expect to value the person intrinsically, but that wouldn’t be its motivation in creating the person; its motivation in creating the person would have to do with valuing me. And if it modified its motivations to avoid annoying me in this way before it created the person, that would probably have other consequences on its actions that I wouldn’t care for, like motivating it to go around creating lots of persons left and right because people are just so darned intrinsically valuable and more are needed.
I’m sorry, but I’m going to have to call bollocks on this. Jesus Christ, don’t you want to live? Why aren’t you currently opting for euthanasia on the risk you end up friendless tomorrow?
Why aren’t you currently opting for euthanasia on the risk you end up friendless tomorrow?
Well, I probably won’t end up friendless tomorrow; and most of the mechanisms by which that could happen would not prohibit me from “opting for euthanasia”.
You probably won’t end up friendless in the event of a recovery from cryo storage. There is no reason you couldn’t choose to opt for euthanasia then either.
If we modify the case so the FAI isn’t autonomously creating the person, but rather waking me up and quizzing me on what I want em to be like, a) I really doubt I could do that in a timely fashion, and b) I think the creepiness might prevent me from wanting to do it at all.
Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?
That would be fine, and the possibility has already been covered (it was described, I think, as “super-Facebook”) but I wouldn’t bet on it. Frankly, I’m not even sure I’m comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.
You prefer that the hardware inside your head, with its known (and unknown) limitations, compute your utility function, as opposed to having it computed within the aforementioned omniscient being? Why?
Utility functions are actually an extreme of consequentialism; they state that your actions should be based not just on consequences, but on a weighted probability distribution over outcomes.
Hmm… I think Eliezer might have overstated his case a little (for the lay audience). If you take a utility function to be normative with respect to your actions, it’s not merely descriptive of your preferences, for some meanings of “preference”—not including, I would think, the definition Eliezer would use.
Using more ordinary language, a Kantian might have preferences about the outcomes of his actions, but doesn’t think such preferences are the primary concern in what one ought to do.
Using more ordinary language, a Kantian might have preferences about the outcomes of his actions, but doesn’t think such preferences are the primary concern in what one ought to do.
Oh. Well, that’s not a distinction that seems terribly important to me. I’m happy to talk about “preferences” as being (necessarily) causally related to one’s actions.
Utility functions describe your preferences. Their existence doesn’t presuppose consequentialism, I don’t think.
There are a few things meant by “consequentialism”. It can be as general as “outcomes/consequences are what’s important when making decisions” to as specific as “Mill’s Utilitarianism”. The term was only coined mid-20th century and it’s not-very-technical jargon, so it hasn’t quite settled yet. I’m pretty sure the use here is more on the general side.
Other theories about what’s important when making decisions (deontology, virtue ethics) might in principle be expressible as utility functions, but are not naturally amenable to it.
Other theories about what’s important when making decisions (deontology, virtue ethics) might in principle be expressible as utility functions, but are not naturally amenable to it.
Why not, if they’re about preferences?
My understanding is that a utility function is nothing but a scaled preference ordering, and I interpret ethical debates as being disputes about what one’s preferences—i.e. one’s utility function—ought to be.
For example (to oversimplify and caricature): the “consequentialist” might argue that one should be willing to torture one person to save 1000 from certain death, while the “deontologist” argues that one should not because Torture is Wrong. Both sides of this argument are asserting preferences about the state of the world: the “consequentialist” assigns higher utility to the situation in which 1000 people are alive and you’re guilty of torture, and the “deontologist” assigns higher utility to the situation in which the 1000 have perished but your hands are clean.
This is called the “consequentialist doppelganger” phenomenon, when I’ve heard it described, and it’s very, very annoying to non-consequentialists. Yes, you can turn any ethical system into a consequentialism by applying the following transformation:
1. Ask what the world would be like if everyone followed Non-Consequentialism X.
2. Act to achieve the outcome yielded by Step 1.
But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.
But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.
I’m tempted to ask what kind of reasons could possibly fall into such a category—but we don’t have to have that discussion now unless you particularly want to.
Mainly, I just wanted to point out that when whoever-it-was above mentioned “your utility function”, you probably should have interpreted that as “your preferences”.
I’m tempted to ask what kind of reasons could possibly fall into such a category—but we don’t have to have that discussion now unless you particularly want to.
There should be a “Deontology for Consequentialists” post, if there isn’t already.
Actually, it was exactly the problems with this formulation that I was talking about in the pub with LessWrongers on Saturday. Consequentialism isn’t about maximizing anything; that’s a deontologist’s way of looking at it. Consequentialism says that if action A has a Y better outcome than action B, then action A is better than action B by Y. It follows that the best action is the one with the best outcome, but there isn’t some bright crown on the best action compared to which all other actions are dull and tarnished; other actions are worse to exactly the extent to which they bring about worse consequences, that’s all.
I don’t think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.
Strictly, no. Virtue ethics is self-regarding that way. But it isn’t like virtue ethics says you shouldn’t care about other people’s virtue. It just isn’t calculated at that level of the theory. Helping other people be virtuous is the compassionate and generous thing to do.
I don’t think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.
Such a person is sometimes called a “Mad Bodhisattva”.
Certainly a way I’ve framed it in the past (and it sounds perfectly in line with the Confucian conception of virtue ethics) but I don’t think it’s quite right. At the very least, it’s worth mentioning that a lot of virtue ethicists don’t believe a theory of right action is appropriately part of virtue ethics.
I’m tempted to ask what kind of reasons could possibly fall into such a category—but we don’t have to have that discussion now unless you particularly want to.
Not to butt in, but “x is morally obligatory” is a perfectly good reason to do any x. That is the case whether x is exhibiting some virtue, following some rule, or maximizing some end.
You may run into problems trying to create a utility function for some forms of deontology, at least if you’re mapping into the real numbers. For instance, some deontologists would say that killing a person has infinite negative utility which can’t be cancelled out by any number of positive utility outcomes.
That wouldn’t be mapping into the real numbers, of course, since infinity isn’t a real number.
As I understand it, utility functions are supposed to be equivalence classes of mappings into the real numbers, where two such mappings are said to be equivalent if they are related by a (positive) affine transformation (x → ax + b where a>0).
A strictly monotonic transformation will preserve your preference ordering of states but not your preference ordering for actions to achieve those states. That is, only affine transformations preserve the ordering of expected values of different actions.
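This distinction is easy to check numerically. A minimal sketch (the lotteries and transforms here are made-up illustrations, not anything from the thread): a positive affine transform preserves the ordering of expected utilities, while a merely monotonic transform can flip it even though it preserves the ordering of individual outcomes.

```python
import math

# A lottery is a list of (probability, utility) pairs.
# Lottery A: 50/50 between utility 0 and utility 10 (expected utility 5).
# Lottery B: utility 4 for certain (expected utility 4).

def expected_utility(lottery, transform=lambda u: u):
    return sum(p * transform(u) for p, u in lottery)

A = [(0.5, 0.0), (0.5, 10.0)]
B = [(1.0, 4.0)]

# Untransformed: A is preferred to B.
assert expected_utility(A) > expected_utility(B)

# Positive affine transform u -> 2u + 3: the ordering is preserved.
affine = lambda u: 2 * u + 3
assert expected_utility(A, affine) > expected_utility(B, affine)

# Strictly monotonic but non-affine transform u -> sqrt(u): the ordering
# of individual outcomes is unchanged, yet the lottery ordering flips,
# because sqrt is concave (it penalizes the risky lottery).
assert expected_utility(A, math.sqrt) < expected_utility(B, math.sqrt)
```

So two utility assignments related by `u -> 2u + 3` describe the same agent, while `u -> sqrt(u)` describes a more risk-averse one.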
Right, which is why I was saying that some ethical theories can’t be expressed by a utility function. And there could be many such incomparable qualities: even adding in infinity and negative infinity may not be enough (though the transfinite ordinals, or the surreal numbers, might be).
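One concrete way to see the difficulty: a strict deontological prohibition behaves like a lexicographic ordering, which tuples can encode but a single real number in general cannot (the classical result is that the lexicographic order on a continuum of outcome pairs has no real-valued representation). A toy sketch, with my own made-up outcome encoding:

```python
# Lexicographic preferences: first minimize killings, then maximize welfare.
# An outcome is a pair (killings, welfare). Python compares tuples
# lexicographically, so ranking by (-killings, welfare) gives exactly the
# "killings always dominate" ordering.

def rank(outcome):
    killings, welfare = outcome
    return (-killings, welfare)

# No finite amount of welfare offsets even one killing:
assert rank((0, 1)) > rank((1, 10**9))
# With killings equal, more welfare is strictly better:
assert rank((0, 5)) > rank((0, 1))
```

With finitely many outcomes this ordering can still be squeezed into the reals; it is only over a continuum of welfare levels that no real-valued utility function can represent it, which is why the surreal numbers or transfinite ordinals get invoked.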
I’m surprised at that +b, because that doesn’t preserve utility ratios.
Right, which is why I was saying that some ethical theories can’t be expressed by a utility function.
Ah, I see. But I’m still not actually sure that’s true, though...see below.
I’m surprised at that +b, because that doesn’t preserve utility ratios.
Indeed not; utilities are measured on an interval scale, not a ratio scale. There’s no “absolute zero”. (I believe Eliezer made a youthful mistake along these lines, IIRC.) This expresses the fact that utility functions are just (scaled) preference orderings.
You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error prone substrate be used to compute your preferences. This is a contradiction. Why are you currently endorsing stupid “terminal” values?
You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error prone substrate be used to compute your preferences.
Wait, are you suggesting that I be uploaded into something with really excellent computational power so I myself would become a superintelligence? As opposed to an external agent that happened to be superintelligent? That might actually work. I will have to think about that. You could have been less rude in proposing it, though.
No. I am suggesting that the situation I described is what you would find in an FAI. You really should be deferring to Eliezer’s expertise in this case.
What about my statements was rude? How can I present these arguments without making you feel uncomfortable?
No. I am suggesting that the situation I described is what you would find in an FAI.
Then I don’t understand what you said.
You really should be deferring to Eliezer’s expertise in this case.
I will not do that as long as he seems confused about the psychology he’s trying to predict things for.
What about my statements was rude? How can I present these arguments without making you feel uncomfortable?
I think calling my terminal values “stupid” was probably the most egregious bit. It is wise to avoid that word as applied to people and things they care about. I would appreciate it if people who want to help me would react with curiosity, not screeching incredulity and metaphorical tearing out of hair, when they find my statements about myself or other things puzzling or apparently inconsistent.
If he and I are confused, you are seriously failing to describe your situation. You are a human brain. Brains work by physical laws. Bayesian super-intelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.
I understand your antipathy for the word stupid. I shall try to avoid it in the future.
If he and I are confused, you are seriously failing to describe your situation.
Yes, this is very likely. I don’t think I ever claimed that the problem wasn’t in how I was explaining myself; but a fact about my explanation isn’t a fact about the (poorly) explained phenomenon.
Bayesian super-intelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.
I can figure out how to fix the issues I have too: I’m in the process of befriending some more cryonics-friendly people. Why do people think this isn’t going to work? Or does it just seem like a bad way to approach the problem for some reason? Or do people think I won’t follow through on signing up should I acquire a suitable friend, even though I’ve offered to bet money on my being signed up within two years barring immense financial disaster?
Your second paragraph clears up my lingering misunderstandings; that was the missing piece of information for me. We were (or at least I was) arguing about a hypothetical situation instead of the actual situation. What you’re doing sounds perfectly reasonable to me.
A “user-friendly” way to do this would be for the FAI to send an avatar/proxy to act as a guide when you wake up: explain how things work, introduce you to others whose company you might enjoy, answer any questions you might have, help you get set up in a way that works for you, help you locate people you know who might be alive, etc.
A FAI would know better than we do what we find creepy/uncomfortable/etc, and would probably avoid it as much as possible.
Nope. The best thing it could do would be retrieve my dead friends and family. But if we’re talking about whether I should sign up for cryonics, I’m assuming that’s the only way somebody gets to be not dead after having died a while ago. If we have an AI that’s so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I’m safe whether I sign up or not! And if we have one that can’t, I think I’m only safe if I am signed up with at least one loved one.
The best thing it could do would be retrieve my dead friends and family.
Out of curiosity—how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn’t know. Obviously they wouldn’t be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they’d still seem just like your friends to you. Would you find that acceptable?
My initial reaction is that I would really hate this. It’s one of the things that makes me really uneasy about extreme “neural archaeology”-style cryonics: I want an actual reconstruction, not just a plausible one.
You can think of no scenarios between those two that would entice you to sign up?
Nope. You’re welcome to try, though, if you value my life and don’t want to try the “befriend me while signed up or on track to become so” route via which several wonderful people are helping.
My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory
No it’s not. It’s just scary.
Am I parsing this correctly? You’re intending to say that Alicorn isn’t really experiencing what she’s reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?
That’s fairly obviously wrong: If Alicorn really was scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.
It’s also pretty offensive for you to keep suggesting that. Do you really think you’re in a better position to know about her than she’s in to know about herself? You’re implying a severe lack of insight on her part when you say things like that.
I am not suggesting that Alicorn is anything other than what she thinks she is.
But when she suggests that she has psychological problems a superintelligence can’t solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.
There simply isn’t anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, “a superintelligence couldn’t understand or handle my problems!” You get to say that to your friends, your sister, your mother, and certainly to me, but you don’t get to shout it at a superintelligence because that is silly.
Human brains just don’t have that kind of complicated in them.
I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.
I see at least one plausible case where an AI couldn’t solve the problem: All it takes is for none of Alicorn’s friends to be cryopreserved and for it to require significantly more than 5 hours for her brain to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I’m assuming that she’d consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked if a particular solution would be acceptable is a significant part of making that solution acceptable, such that suggested solutions would not be acceptable if they hadn’t already been suggested. (This is true for me, but may not be similarly true for Alicorn.))
Your desire isn’t the problem. Maybe it was poorly phrased; “psychological challenge” or “psychological task for superintelligence to perform” or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.
It’s just a phrase. If someone isn’t being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.
This argument might have already gone on too long, but I’m going to try stating as what I see as your main objection to see if I actually understand your true objection.
You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you’ll probably be miserable in the future and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won’t want to accept any type of brain modification or enhancement that would make you not miserable. If you’re sufficiently miserable, it’s likely that a FAI could change you without your consent, and you prefer death to the chance of that happening.
You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value.
Insert “without my conscious, deliberate, informed consent, and ideally agency”.
You think you’ll probably be miserable in the future
Replace “you’ll probably” with “you are reasonably likely to”.
and you find it hard to believe that the FAI will find you a friend comparable to your current friends.
Add “with whom I could become sufficiently close within a brief and critical time period”.
You won’t want to accept any type of brain modification or enhancement that would make you not miserable.
See first adjustment. n.b.: without my already having been modified, the “informed” part would probably take longer than the brief, critical time period.
If you’re sufficiently miserable, it’s likely that a FAI could change you without your consent
Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.
and you prefer death to the chance of that happening.
Medical grade nanobots capable of rendering people immortal exist. They’re a one time injection that protect you from all disease forever. Do you and your family accept the treatment? If so, you’re essentially guaranteeing your family will survive until the singularity, at which point a malevolent singleton might take over the universe and do all sorts of nasty things to you.
I agree that cryonics is scarier than the hypothetical, but the issue at hand isn’t actually different.
Children are only helpless for about 10 years. If the singleton came within 10 years of my child being born without warning, it would be awful but not my fault. If I had any warning of it coming, and I still chose to have children that then came to harm, it would be my fault.
Good question. The reason is that this has recently become an ethical problem for me rather than an optimization problem. Perhaps that is why I think of it in far mode, if that is what I’m doing. But I do know that in ethical mode, it can be the case that you’re no longer allowed to base a decision on the computed “average value” … even small risks or compromises might be unacceptable. If I allow my child to come to harm, and I’m not allowed to do that, then it doesn’t matter what advantage I’m gambling for. I perceive that at a certain age they can make their own decision, and then with relief I may sign them up for cryonics at their request.
First, I doubt that a future which would revive my child would be any worse than today. Second, my position is that cryonics can ameliorate the creation of a child, not obviate the inherent problems. I would ask you to read all of the replies about the preferability of cryo over dying—if it’s good enough for me, then it’s good enough for my child.
I can afford cryonics, but I think I wouldn’t want to vitrify children for the same reasons you are criticizing parents for having children. If it is ethical to bring children into the world only if you can care for them, protect them and provide for them, how could it be ethical to send a helpless, dependent child to an indeterminate future? We can make a decision to have a child in the present with lots of relevant information about the present. Sending a child to the future might be negligent.
Are they better off dead?
Yeah, maybe.
I would like to imagine a post-cryonic life for my child that is positive.
However, what if it isn’t positive? What if my child thinks I abandoned her, as she is exploited or abused or neglected? Better to know that she experienced a few happy years, and accept that that is all there is, than risk a horrible future she can’t get away from.
If there were one person I trusted to have custody of her, it would make a difference. If she were old enough to reason on her own, and know the difference between right and wrong, it would make a difference. She’s just so helpless. I shouldn’t send her there without someone who loves her, but I can’t guarantee that someone who loves her would be there.
Can’t you sign yourself up too, and go with her?
Yes, of course. My husband would sign up too, and the grandparents, and aunts and uncles and grown siblings and their descendants. However, in this future beyond my control, they may not have any meaningful custody or be woken up at all.
I might offer that what I am imagining most vividly is a splintered, trans-humanist society that might value small human children but not the things that human children need to be happy.
So what you’re concerned about is that if your entire family signed up, they might wake up your child but not any of her relatives, or wake all of you up and then not let you actually take care of her?
Yes.
I should add that I don’t think my husband and I think cryonics is “creepy”. We would sign up, whatever that means.* And if my kids want to sign up when they’re old enough to make that decision, then I would let them sign up. It’s just not something I feel comfortable doing to a small child; sending them someplace I haven’t been and can’t imagine.
* I think the “would” means that so far it sounds OK, but we realize we haven’t worked through all the angles and anticipate some oscillations in our POV.
If your children were about to leave for a strange country without you—or for that matter with you, to some place that none of you had ever been—would you, in your pity, shoot them?
WHAT IS WRONG WITH YOU PEOPLE? WHY IS YOUR BRAIN NOT PROCESSING THIS? IT’S YOUR KIDS’ FUCKING LIVES NOT A FAIRY TALE YOU’RE WRITING. You don’t get to be uncomfortable with the fairy tale and so refuse to write it. All you can do is kill your kids. That’s it. That’s all refusal means.
The visceral reaction to “kill your kids” comes from imagining that you’re actually killing them, not letting them go about a normal life. You can argue that it comes down to the same thing, but if they were really the same thing, you could use the less emotionally-loaded language.
What you’re saying: What kind of terrible parent lets their kids live a life slightly better than they had?
Mere framing, depending simply on what your brain thinks is normal. Visit a convention of cryonicists and talk to the kids signed up for cryonics. Those parents wouldn’t think very highly of themselves if they didn’t pay to sign up their kids. If their children died and were lost, they would hold themselves at fault. They’re right.
(The obvious metaphor—so obvious, in fact, that it is not even a metaphor—is withholding lifesaving medical care. Consider how we feel about parents who refuse to treat their kid’s cancer, for example.)
Yes, that is indeed the analogy—pardon me, classification—that I was looking for.
Huh? How about:
seems more fair.
Not quite. If my phrasing was confusing, try instead:
Exactly. Or “What kind of parent settles for letting their kids have merely a slightly better life than they had when a dramatically better life might be possible?”
The world is largely a pretty normal place. I’ve lived in Africa and Europe and have spent time in Central America and almost every type of place in the United States. I feel like I could begin to assess the risk to some extent.
What do I know about a future with alien minds? I thought it was you who argued that we can’t possibly know their motives and values.
(Take the horrible/awfulness of me wanting to kill my kids and project that onto the future society that might revive them. If it’s in me, why can’t it be in them?)
Your children are standing in front of the boat. You can send them on the boat. You can go with them on the boat. Or you can cut their throats. That’s it. There’s nothing else.
I hand you the knife.
What do you do?
I think I’m starting to understand what the absence of clicking is. People who click process problems as if they’re in the real world. If they wouldn’t cut their child’s throat, then they sign their kid up for cryonics.
People who don’t click don’t process the problem like it’s the real world. Strange reactions rise up in them, fears of the unknown, fears of the known, and they react to these fears by running away within the landscape of their minds, and somewhere on the outside words come out of their lips like “But who knows what will happen? How can I send my kids into that?” It’s an expression of that inner fear, an expression of that running away, words coming out of the lips that match up to what’s going on inside their heads somehow… the dread of losing control, the feeling of not understanding, the horror of thinking about mortality, all of these are expressed in a flinch away from the uncomfortable thought and put stumblingly into words.
So they kill their children, because they aren’t processing a real world, they’re processing words connected to words, ways of flinching and running away and giving vent to those odd internal feelings.
And the clickers are standing in front of that boat.
Yes, I’m not a “clicker”. I realize this wasn’t addressed to me, but about me, but I don’t see how this should make me feel ashamed or even inadequate. I need to make ethical/moral decisions and I have no choice but to think through them on my own and make my own decision. When I was 16, I was certain that Proof by Induction would not work, and ever since I understood that it did work, I’ve never claimed certainty based on intuition. However, some arrogance remains in that if something doesn’t convince me, I think: why should I be convinced, if I’m not convinced? I haven’t had any feedback from life that my ability to make decisions isn’t working. I have some problems, but they don’t seem related in any way to not clicking. (Well, maybe I need to “click” on you guys just being too culturally different from me.)
I wonder if in response to your hypothetical you expect a reasonable me to suddenly realize, “oh no! I would never kill them!” and thus find the contradiction in my far-mode reasoning about cryonics. But I would. (Filling in drastic and dire reasons for why the children were being taken on a boat against my will.) So would you, I think, slip a deadly but painless pill to a young boy about to be tortured and killed in a religious ceremony if you were certain it was going to happen. Perhaps you were trying to identify an ethical failing: that at one probability of risk I “let them” live, but at a higher level I arbitrarily, cruelly kill them. I don’t think even this is correct; I don’t know where to begin to know how to reason where the ‘killing’ probability would be, and don’t claim that I do. I only know that it would be an agonizing thing for a parent to ever have to decide, but one they can’t escape from just by glibly pretending such scenarios cannot happen, if the scenario does happen.
I submit that I’m an open-minded and curious person that isn’t afraid of new ideas. (I might be afraid of a lion, but I’m not afraid of thinking about lions.) One problem that I seem to have – though I actually like it – is that I tend to forget what my reasoning on any topic is after a while, and I’m more or less a blank slate again. If I have a negative view of cryonics, when I never even heard of it outside of LW, I think it is because I found some inconsistency in your own world view about it.
For example, it hadn’t really occurred to me at first that ‘somebody strange’ might revive my daughter. My concerns were “near-concerns” – how in the world would I ever get an ambulance in time, much less get her frozen in time, in this backwater place I live in where they aren’t even competent enough to insert a child catheter correctly? But then I read several times this suspiciously repetitive chant that ‘they’re not worried’ about negative-value futures because being revived would select for positive futures.
Well, that’s clearly not dependable optimism. We might get revived just because they want to cut down on energy costs in Arizona, and keeping 20 million people frozen takes a lot of power. Maybe they have a penchant for realistic theater and want to simulate the Holocaust with real non-genetically modified humans.
In my mind, previous to hearing the chant, all of these scenarios were unlikely because the world is normal. Obama and byrnema and Joe 6-pack and maybe Eliezer have children, and then their children have children, and then the children of these revive us and we live in a world that is essentially the same or somewhat better. But when I process people talking about the set of possible futures like it’s actually really large enough to include all kinds of horrors with non-negligible probability, then unwarranted optimism in the direction of the probability of something I or they know nothing about does not comfort me.
That is the outcome of the group applying epistemic hygiene to only arguments that lead to conclusions they disagree with. The bad arguments for the views they agree with, left untouched, will sway a person like me who does not think in a linear way, but organically assimilates assumptions and hypotheses as I encounter them.
Your description of not-clicking sounds functionally similar to what Amanda Baggs calls ‘widgets’, though she uses the term in a more political than personal context.
This. This so god-damn hard.
It looks to me like you have the choice between running a small risk of your daughter thinking you abandoned her (to a scary future that won’t leave you in a satisfactory family unit)… or running a slightly larger risk of actually abandoning her (to the gaping maw of death). The ideal is that she gets to be 18 without dying and then decides she wants to sign up, of course (and you and other relatives are still alive and ready to join her with stacks of paperwork at the ready), but we’re talking about managing risks, here, not the best case.
I hope you don’t mind the clarification, but I think you’ve underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I’m not too concerned about the satisfactory family unit, as long as my daughter is psychologically healthy.)
This compared to death, which is terrible for reasons other than “death”. Terrible because I will miss her and because of all the relationships disconnected and because her potential living this life won’t be fulfilled—nothing that cryonics will give back.
It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you/someone write something to help me relate to that?
I realize this is probably weird coming from me, considering my own cryonics hangup, but we’re already assuming they won’t revive anyone they can’t render passably physically healthy—I think they’d make some effort to take the same precautions regarding psychological health. My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory; generic needs for care and affection in a small child are so obvious I would be astounded if the future didn’t have an arrangement in place before they revived any frozen children.
I’ll try, but I’m not sure exactly what you mean by “the stream of consciousness” or “independent of relationships”. I value me (my software), I value you (your software), I prefer that these softwares be executed in pleasant environments rather than sitting around statically—but then, I’d probably cease to value my software in an awful hurry if it had no relationships with other software, and I’d respect a preference on your part to end your own software execution if that seemed to be your real and reasoned desire.
Why do I have these values? Well, people are just so darned special, that’s all I can say.
No it’s not. It’s just scary.
You really, really think that this, on the one hand, is “obvious”, but on the other hand, a superintelligence is going to look inside your head and go, “Huh, I just can’t figure that out.”
YOU ARE A SMALL CHILD. We all are. I know that, why can’t everyone see it?
I’m going to outright ignore you on this one. I have been met with incredulity, not mere curiosity (“Can you tell us more about the experiences you’ve had that let you model this extreme need?”), let alone commiseration (“wow, me too! let’s make friends and sign up together and solve each other’s problems!”) when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. A FAI probably could. You aren’t one. And since I know more about the phenomenon than you, I’m going to trust my predictions about what the FAI would say on inspecting my brain over yours. I think it’d say “wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve.”
You’re raving. Perhaps you are deficient in a vitamin or mineral.
I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn’t even have to do anything creepy! Human beings are simply not that complicated!
Upvoted because: with that many exclamation points, how could you be wrong?
You think the best thing a FAI could do would be to throw up its hands and say, “welp, she’s screwed”?
Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think impossible which turn out to actually be impossible.
I am very confident that an FAI could, if necessary create a person to order, who would be perfectly tuned to becoming someone’s friend in a few hours. How often does this kind of thing happen by accident in kindergarten?
Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.
Dude, creeeeeeeeeeepy.
That’s a worst case scenario. Even if necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don’t you value your life? Why are you so willing to assume that super intelligence can’t think of any better solutions than you can?
In principle, I’m willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)
Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.
I think that people should be created by other persons who are motivated, at least in part, by an expectation to intrinsically value the person so created. If a FAI created a person for the express purpose of being my friend, it would presumably expect to value the person intrinsically, but that wouldn’t be its motivation in creating the person; its motivation in creating the person would have to do with valuing me. And if it modified its motivations to avoid annoying me in this way before it created the person, that would probably have other consequences on its actions that I wouldn’t care for, like motivating it to go around creating lots of persons left and right because people are just so darned intrinsically valuable and more are needed.
I’m sorry, but I’m going to have to call bollocks on this. Jesus Christ, don’t you want to live? Why aren’t you currently opting for euthanasia on the risk you end up friendless tomorrow?
Well, I probably won’t end up friendless tomorrow; and most of the mechanisms by which that could happen would not prohibit me from “opting for euthanasia”.
You probably won’t end up friendless in the event of a recovery from cryo storage. There is no reason you couldn’t choose to opt for euthanasia then either.
But in this case, it would be you that creates the person, with purpose of intrinsically valuing em, and the FAI is just a tool you use to do it.
If we modify the case so the FAI isn’t autonomously creating the person, but rather waking me up and quizzing me on what I want em to be like, a) I really doubt I could do that in a timely fashion, and b) I think the creepiness might prevent me from wanting to do it at all.
Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?
That would be fine, and the possibility has already been covered (it was described, I think, as “super-Facebook”) but I wouldn’t bet on it. Frankly, I’m not even sure I’m comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.
I’m not sure mind reading would be necessary. I hear Netflix does a pretty good job of guessing which movies people would like.
You like your mental privacy vis-a-vis an (effectively) omnipotent, perfectly moral being, more than you value your life?
*thinks*
I value the ability to consciously control which of my preferences are acted on that much. Mental privacy qua mental privacy, perhaps not.
You prefer that the hardware inside your head, with its known (and unknown) limitations, compute your utility function, as opposed to having it computed inside the aforementioned omniscient being? Why?
No. I’m software. My preferences stand even if you hypothetically implement me in silico.
No. Geez, can we drop the “utility functions” and all the other consequentialism debris for like a week sometime? It would be a welcome respite.
It’s a terminal value. We have a convention of not having to answer “why” about those.
Utility functions describe your preferences. Their existence doesn’t presuppose consequentialism, I don’t think.
Utility functions are actually an extreme of consequentialism; they state that your actions should be based not just on consequences, but on a weighted probability distribution over outcomes.
In that case, how could you be said to have preferences about outcomes without being a consequentialist?
Can we not have preferences without a utility function?
Hmm… I think Eliezer might have overstated his case a little (for the lay audience). If you take a utility function to be normative with respect to your actions, it’s not merely descriptive of your preferences, for some meanings of “preference”—not including, I would think, the definition Eliezer would use.
Using more ordinary language, a Kantian might have preferences about the outcomes of his actions, but doesn’t think such preferences are the primary concern in what one ought to do.
Oh. Well, that’s not a distinction that seems terribly important to me. I’m happy to talk about “preferences” as being (necessarily) causally related to one’s actions.
There are a few things meant by “consequentialism”. It can be as general as “outcomes/consequences are what’s important when making decisions” to as specific as “Mill’s Utilitarianism”. The term was only coined mid-20th century and it’s not-very-technical jargon, so it hasn’t quite settled yet. I’m pretty sure the use here is more on the general side.
Other theories about what’s important when making decisions (deontology, virtue ethics) could perhaps be expressed as utility functions, but are not naturally amenable to it.
Why not, if they’re about preferences?
My understanding is that a utility function is nothing but a scaled preference ordering, and I interpret ethical debates as being disputes about what one’s preferences—i.e. one’s utility function—ought to be.
For example (to oversimplify and caricature): the “consequentialist” might argue that one should be willing to torture one person to save 1000 from certain death, while the “deontologist” argues that one should not because Torture is Wrong. Both sides of this argument are asserting preferences about the state of the world: the “consequentialist” assigns higher utility to the situation in which 1000 people are alive and you’re guilty of torture, and the “deontologist” assigns higher utility to the situation in which the 1000 have perished but your hands are clean.
This is called the “consequentialist doppelganger” phenomenon, when I’ve heard it described, and it’s very, very annoying to non-consequentialists. Yes, you can turn any ethical system into a consequentialism by applying the following transformation:
1. Ask what the world would be like if everyone followed Non-Consequentialism X.
2. You should act to achieve the outcome yielded by Step 1.
But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.
I’m tempted to ask what kind of reasons could possibly fall into such a category—but we don’t have to have that discussion now unless you particularly want to.
Mainly, I just wanted to point out that when whoever-it-was above mentioned “your utility function”, you probably should have interpreted that as “your preferences”.
There should be a “Deontology for Consequentialists” post, if there isn’t already.
I might write that.
Perhaps I should write “Utilitarianism for Deontologists”. Here goes:
“Follow the maxim: ‘Maximize utility’”.
Actually, it was exactly the problems with this formulation that I was talking about in the pub with LessWrongers on Saturday. Consequentialism isn’t about maximizing anything; that’s a deontologist’s way of looking at it. Consequentialism says that if action A has a Y better outcome than action B, then action A is better than action B by Y. It follows that the best action is the one with the best outcome, but there isn’t some bright crown on the best action compared to which all other actions are dull and tarnished; other actions are worse to exactly the extent to which they bring about worse consequences, that’s all.
I’d like to see you write Virtue Ethics for Consequentialists, or for Deontologists.
“Being virtuous is obligatory, being vicious is forbidden.”
This feels like cheating.
“Do that which leads to people being virtuous.”
I don’t think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.
How about, “Maximize your virtue.”
So other people’s virtue is worth nothing?
Strictly, no. Virtue ethics is self-regarding that way. But it isn’t like virtue ethics says you shouldn’t care about other people’s virtue. It just isn’t calculated at that level of the theory. Helping other people be virtuous is the compassionate and generous thing to do.
Agreed, at least on the common (recent American) ethical egoist reading of virtue ethics.
Such a person is sometimes called a “Mad Bodhisattva”.
Certainly a way I’ve framed it in the past (and it sounds perfectly in line with the Confucian conception of virtue ethics) but I don’t think it’s quite right. At the very least, it’s worth mentioning that a lot of virtue ethicists don’t believe a theory of right action is appropriately part of virtue ethics.
Please do. I’d love to read it.
Ha! I was about to say, “I wonder if Alicorn might be interested in writing such a post”.
Not to butt in but “x is morally obligatory” is a perfectly good reason to do any x. That is the case where x is exhibiting some virtue, following some rule or maximizing some end.
You may run into problems trying to create a utility function for some forms of deontology, at least if you’re mapping into the real numbers. For instance, some deontologists would say that killing a person has infinite negative utility which can’t be cancelled out by any number of positive utility outcomes.
That wouldn’t be mapping into the real numbers, of course, since infinity isn’t a real number.
As I understand it, utility functions are supposed to be equivalence classes of mappings into the real numbers, where two such mappings are said to be equivalent if they are related by a (positive) affine transformation (x → ax + b where a>0).
Why do you think this restricts to positive affine transformations, rather than any strictly monotonic transformation?
Other monotonic transformations don’t preserve preferences over gambles.
Ah, right, that’s what I was missing. Thanks.
A strictly monotonic transformation will preserve your preference ordering of states but not your preference ordering for actions to achieve those states. That is, only affine transformations preserve the ordering of expected values of different actions.
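The claim above can be checked with a small sketch (the specific gambles and utility numbers here are made up for illustration): a positive affine transformation of a utility function preserves the ranking of gambles by expected utility, while a strictly monotonic but nonlinear transformation can flip it.

```python
import math

# Gamble A: 50/50 chance of utility 0 or 10.  Gamble B: utility 4 for sure.
# A gamble is a list of (probability, utility) pairs.
def expected(u, gamble):
    return sum(p * u(x) for p, x in gamble)

A = [(0.5, 0.0), (0.5, 10.0)]
B = [(1.0, 4.0)]

identity = lambda x: x
affine   = lambda x: 2 * x + 3     # positive affine: x -> ax + b, a > 0
root     = math.sqrt               # strictly monotonic, but not affine

assert expected(identity, A) > expected(identity, B)  # A preferred (5 > 4)
assert expected(affine, A) > expected(affine, B)      # still A (13 > 11)
assert expected(root, A) < expected(root, B)          # flipped! (~1.58 < 2)
```

The square-root transform preserves the ordering of the individual outcomes (0 < 4 < 10 before and after), but because it is concave it penalizes the risky gamble, so the ordering of expected values changes.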
Right, which is why I was saying that some ethical theories can’t be expressed by a utility function. And there could be many such incomparable qualities: even adding in infinity and negative infinity may not be enough (though the transfinite ordinals, or the surreal numbers, might be).
I’m surprised at that +b, because that doesn’t preserve utility ratios.
Ah, I see. But I’m still not actually sure that’s true, though...see below.
Indeed not; utilities are measured on an interval scale, not a ratio scale. There’s no “absolute zero”. (I believe Eliezer made a youthful mistake along these lines, IIRC.) This expresses the fact that utility functions are just (scaled) preference orderings.
You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error prone substrate be used to compute your preferences. This is a contradiction. Why are you currently endorsing stupid “terminal” values?
Wait, are you suggesting that I be uploaded into something with really excellent computational power so I myself would become a superintelligence? As opposed to an external agent that happened to be superintelligent? That might actually work. I will have to think about that. You could have been less rude in proposing it, though.
No. I am suggesting that the situation I described is what you would find in an FAI. You really should be deferring to Eliezer’s expertise in this case.
What about my statements was rude? How can I present these arguments without making you feel uncomfortable?
Then I don’t understand what you said.
I will not do that as long as he seems confused about the psychology he’s trying to predict things for.
I think calling my terminal values “stupid” was probably the most egregious bit. It is wise to avoid that word as applied to people and things they care about. I would appreciate it if people who want to help me would react with curiosity, not screeching incredulity and metaphorical tearing out of hair, when they find my statements about myself or other things puzzling or apparently inconsistent.
If he and I are confused, you are seriously failing to describe your situation. You are a human brain. Brains work by physical laws. Bayesian super-intelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.
I understand your antipathy for the word stupid. I shall try to avoid it in the future.
Yes, this is very likely. I don’t think I ever claimed that the problem wasn’t in how I was explaining myself; but a fact about my explanation isn’t a fact about the (poorly) explained phenomenon.
I can figure out how to fix the issues I have too: I’m in the process of befriending some more cryonics-friendly people. Why do people think this isn’t going to work? Or does it just seem like a bad way to approach the problem for some reason? Or do people think I won’t follow through on signing up should I acquire a suitable friend, even though I’ve offered to bet money on my being signed up within two years barring immense financial disaster?
Your second paragraph clears up my lingering misunderstandings; that was the missing piece of information for me. We were (or at least I was) arguing about a hypothetical situation instead of the actual situation. What you’re doing sounds perfectly reasonable to me.
If you are willing to take the 1 in 500 chance, my best wishes.
Where did that number come from and what does it refer to?
Actuarial tables: odds of death over a two-year period for someone in their twenties (unless I misread the table, which is not at all impossible).
It’s really that likely? Can I see the tables? The number sounds too pessimistic to me.
http://www.socialsecurity.gov/OACT/STATS/table4c6.html
Looks like it should be 1/1000 for two years to me.
It should be around 1 in 400 for males in their 20s and 1 in 1000 for females in their 20s.
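For anyone who wants to check the arithmetic: a two-year death probability comes from compounding the annual one, P = 1 − (1 − q)². The annual values below are rough ballpark figures for the SSA period life table (approximately 0.0013/yr for a male in his mid-twenties, 0.0005/yr for a female), not exact entries.

```python
# Back-of-the-envelope check of the "1 in 400" vs "1 in 1000" figures.
# Annual death probabilities are approximate illustrative values, not
# exact table entries.
def two_year_death_prob(annual_q):
    # Probability of dying within two years = 1 - P(survive both years)
    return 1 - (1 - annual_q) ** 2

male = two_year_death_prob(0.0013)    # ~0.0026, roughly 1 in 400
female = two_year_death_prob(0.0005)  # ~0.0010, roughly 1 in 1000
```

So both figures quoted above are consistent, depending on which row of the table you read.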
I like my mental privacy too, but I am OK with the idea of a non-sentient FAI reading my mind to better predict what it can do for me.
I don’t have much expectation of non-sentience in a sufficiently smart AI.
A “user-friendly” way to do this would be for the FAI to send an avatar/proxy to act as a guide when you wake up: explain how things work, introduce you to others whose company you might enjoy, answer any questions you might have, help you get set up in a way that works for you, help you locate people you know who might be alive, etc.
An FAI would know better than we do what we find creepy/uncomfortable/etc., and would probably avoid it as much as possible.
Nope. The best thing it could do would be retrieve my dead friends and family. But if we’re talking about whether I should sign up for cryonics, I’m assuming that’s the only way somebody gets to be not dead after having died a while ago. If we have an AI that’s so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I’m safe whether I sign up or not! And if we have one that can’t, I think I’m only safe if I am signed up with at least one loved one.
Out of curiosity—how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn’t know. Obviously they wouldn’t be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they’d still seem just like your friends to you. Would you find that acceptable?
No. That would not be okay with me, assuming I knew this about the process.
My initial reaction is that I would really hate this. It’s one of the things that makes me really uneasy about extreme “neural archaeology”-style cryonics: I want an actual reconstruction, not just a plausible one.
You can think of no scenarios between those two that would entice you to sign up? Your arguments seem really specious to me.
Nope. You’re welcome to try, though, if you value my life and don’t want to try the “befriend me while signed up or on track to become so” route via which several wonderful people are helping.
I think the right context for Eliezer’s comment is Expected Creative Surprises.
Am I parsing this correctly? You’re intending to say that Alicorn isn’t really experiencing what she’s reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?
That’s fairly obviously wrong: If Alicorn really was scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.
It’s also pretty offensive for you to keep suggesting that. Do you really think you’re in a better position to know about her than she’s in to know about herself? You’re implying a severe lack of insight on her part when you say things like that.
I am not suggesting that Alicorn is anything other than what she thinks she is.
But when she suggests that she has psychological problems a superintelligence can’t solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.
There simply isn’t anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, “a superintelligence couldn’t understand or handle my problems!” You get to say that to your friends, your sister, your mother, and certainly to me, but you don’t get to shout it at a superintelligence because that is silly.
Human brains just don’t have that kind of complicated in them.
I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.
I see at least one plausible case where an AI couldn’t solve the problem: All it takes is for none of Alicorn’s friends to be cryopreserved and for it to require significantly more than 5 hours for her brain to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I’m assuming that she’d consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked if a particular solution would be acceptable is a significant part of making that solution acceptable, such that suggested solutions would not be acceptable if they hadn’t already been suggested. (This is true for me, but may not be similarly true for Alicorn.))
That’s a… nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.
Your desire isn’t the problem. Maybe it was poorly phrased; “psychological challenge” or “psychological task for superintelligence to perform” or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.
It’s just a phrase. If someone isn’t being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.
This argument might have already gone on too long, but I’m going to try stating as what I see as your main objection to see if I actually understand your true objection.
You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you’ll probably be miserable in the future and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won’t want to accept any type of brain modification or enhancement that would make you not miserable. If you’re sufficiently miserable, it’s likely that an FAI could change you without your consent, and you prefer death to the chance of that happening.
Insert “without my conscious, deliberate, informed consent, and ideally agency”.
Replace “you’ll probably” with “you are reasonably likely to”.
Add “with whom I could become sufficiently close within a brief and critical time period”.
See first adjustment. n.b.: without my already having been modified, the “informed” part would probably take longer than the brief, critical time period.
Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.
For certain subvalues of “that”, yes.
I like people too. :)
I agree with Eliezer that any benevolent reviver would be able to figure out how to create conditions that would make a child (and you) happy.
I definitely have in mind a non-benevolent reviver.
Consider this hypothetical situation:
Medical-grade nanobots capable of rendering people immortal exist. They’re a one-time injection that protects you from all disease forever. Do you and your family accept the treatment? If so, you’re essentially guaranteeing your family will survive until the singularity, at which point a malevolent singleton might take over the universe and do all sorts of nasty things to you.
I agree that cryonics is scarier than the hypothetical, but the issue at hand isn’t actually different.
Children are only helpless for about 10 years. If the singleton came within 10 years of my child being born without warning, it would be awful but not my fault. If I had any warning of it coming, and I still chose to have children that then came to harm, it would be my fault.
Why does fault matter?
Good question. The reason is that this has recently become an ethical problem for me rather than an optimization problem. Perhaps that is why I think of it in far mode, if that is what I’m doing. But I do know that in ethical mode, it can be the case that you’re no longer allowed to base a decision on the computed “average value” … even small risks or compromises might be unacceptable. If I allow my child to come to harm, and I’m not allowed to do that, then it doesn’t matter what advantage I’m gambling for. I perceive that at a certain age they can make their own decision, and then with relief I may sign them up for cryonics at their request.
Only if letting them die is worse.
First, I doubt that a future which would revive my child would be any worse than today. Second, my position is that cryonics can ameliorate the creation of a child, not obviate the inherent problems. I would ask you to read all of the replies about the preferability of cryo over dying: if it’s good enough for me, then it’s good enough for my child.