Some of these questions, like the one about running away from a fire, ignore the role of irrational motivation.
People, when confronted with an immediate threat to their lives, gain a strong desire to protect themselves. This has nothing to do with a rational evaluation of whether or not death is better than life. Even people who genuinely want to commit suicide have this problem, which is one reason so many of them try methods that are less effective but don’t activate the self-defense system (like overdosing on pills instead of shooting themselves in the head). Perhaps even a suicidal person who’d entered the burning building because they planned to jump off the roof would still try to run out of the fire. So running away from a fire, or trying to stop a man threatening you with a sword, cannot be taken as proof of a genuine desire to live, only that any desire to die one might have is not as strong as one’s self-protection instincts.
It is normal for people to have different motivations in different situations. When I see and smell pizza, I get a strong desire to eat the pizza; right now, not seeing or smelling pizza, I have no particular desire to eat pizza. The argument “If your life was in immediate danger, you would want it to be preserved; therefore, right now you should seek out ways to preserve your life in the future, whether you feel like it or not” is similar to the argument “If you were in front of a sizzling piece of pizza, you would want to eat it; therefore, right now you should seek out pizza and eat it, whether you feel like it or not”.
Neither argument is inevitably wrong. But first you would have to prove that the urge comes from a reflectively stable value—something you “want to want”, and not just from an impulse that you “want” but don’t “want to want”.
The empirical reason I haven’t signed up for cryonics yet is that the idea of avoiding death doesn’t have any immediate motivational impact on me, and the negatives of cryonics—weirdness, costs in time and money, negative affect of being trapped in a dystopia—do have motivational impact on me. I admit this is weird and not what I would have predicted about my motivations if I were considering them in the third person, but empirically, that’s how things are.
I can use my willpower to overcome an irrational motivation or lack of motivation. But I only feel the need to do that in two cases. One, where I want to help other people (e.g. giving to charity even when I don’t feel motivated to do so). And two, when I predict I will regret my decision later (e.g. I may overcome akrasia to do a difficult task now when I would prefer to procrastinate). The first reason doesn’t really apply here, but the second is often brought out to support cryonics signup.
Many people who signal acceptance of death appear to genuinely go peacefully and happily—that is, even to the moment of dying they don’t seem motivated to avoid death. If this is standard, then I can expect to go my entire life without at any moment regretting the choice not to sign up for cryonics. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent lack of motivation to get it.
Some have argued that, when I am dead, it will be a pity, because I would be having so much more fun if I were still alive, so I ought to be regretful even though I’m not physically capable of feeling the actual emotion. But this sounds too much like the arguments for a moral obligation to create all potential people, which lead to the Repugnant Conclusion and which I oppose in just about all other circumstances.
That’s just what I’ve introspected as the empirical reasons I haven’t signed up for cryonics. I’m still trying to decide if I should accept the argument. And I’m guessing that as I get older I might start feeling more motivation to cheat death, at which point I’d sign up. And there’s a financial argument that if I’m going to sign up later, I might as well sign up now, though I haven’t yet calculated the benefits.
But analogies to running away from a burning building shouldn’t have anything to do with it.
Many people who signal acceptance of death appear to genuinely go peacefully and happily—that is, even to the moment of dying they don’t seem motivated to avoid death. If this is standard, then I can expect to go my entire life without at any moment regretting the choice not to sign up for cryonics. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent lack of motivation to get it.
[Bold added myself]
Is it accurate to say what I bolded? I know technically it’s true, but only because there isn’t any you to be doing the regretting. Death isn’t so much a state [like how I used to picture sitting in the ground for eternity] as simple non-existence [which is much harder to grasp, at least for me]. And if you have no real issues not existing at a future point, why do you attempt to prolong your existence now? I don’t mean for this to be rude; I’m just curious as to why you would want to keep yourself around now if you’re not willing to stay around as long as life is still enjoyable.
In fairness, I have not signed up for cryonics, but that’s mostly because I’m a college student without any serious income.
By the way, I’m not here to troll, and I do have a serious question that doesn’t necessarily have to do with cryonics. The goal of SIAI (LessWrong, etc.) is to learn about and, if possible, avoid a dystopian future. If you truly are worried about a dystopian future, then doesn’t that serve as a vote of “No confidence” in these initiatives?
Admittedly, I haven’t looked into your history, so that may be a “Well, duh” answer :)
I suppose it serves as a vote of less than infinite confidence. I don’t know if it makes me any less confident than SIAI themselves. It’s still worth helping SIAI in any way possible, but they’ve never claimed a 100% chance of victory.
Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)
Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision).
Let’s say you’re about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don’t have much of a choice about which way you’re going, given that the “room” you’re currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?
Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you’re putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).
No. AI isn’t a gun; it’s a bomb. If you don’t know what you’re doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.
A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won’t reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That’s why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That’s why the saying is “every Marine is a rifleman” and not “every Marine is a grenadier.”
A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation.
Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?
That’s the iffy part.
So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they’d cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you’ll be taking more hits from lost capacity to dodge than the armor can soak up.
If the walls don’t seem to have closed in much by the time you’ve got all that located and equipped, think about the junk you’ve already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you’ve got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.
Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By its very nature, that kind of thing needs to be fitted individually for best results, so don’t just settle for a backpack or ‘supportive community’ that looks nice at arm’s length but aggravates your spine when you actually try it on, especially if it isn’t adjustable. If you’ve only found one or two useful items anyway, don’t even bother.
Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you’re less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you’re picking out of the garbage aren’t contaminated somehow.
Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or else blow up in your face.
doesn’t that serve as a vote of “No confidence” in these initiatives?
For what initiatives? I don’t see any initiatives. And what is the “that” which is serving as a vote? By your sentence structure, “that” must refer to “worry”, but your question still doesn’t make any sense.
Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it’s almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems agreeable to my position, it’s actually still off; you are missing the key point behind the chain of constant care, the needed infrastructure to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone—anyone—is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think of coma patients; hospitals don’t keep them around just to feed them and stare at their bodies.
Anyway, moving on to the “initiatives” comment. Given that LessWrong tends to overlap with SIAI supporters, perhaps I should have said “mission”? Again, I haven’t looked too much into Yvain’s history. However, let’s suppose for the moment that he’s a strong supporter of that mission. Since we:
1. Can’t live in parallel universes.
2. Live in a universe where even (seemingly) unrelated things are affected by each other.
3. Think A.I. may be a crucial element of a bad future, due to #1 and #2.
...I guess I was just wondering if he thinks the outlook for the mission is grim. Signing up for cryonics seems to give a “glass half full” impression. Furthermore, due to #1 and #2 above, I’ll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk… and why it may be helpful for everyone from the LessWrong community to the IEET to be a little more assertive on the issue. Of course, I’m not saying it would eliminate risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don’t know whether to interpret your original question as:
1. Doesn’t signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
2. Doesn’t not signing up indicate skepticism that SIAI will succeed?
3. Doesn’t signing up indicate skepticism that UFAI is something to worry about?
4. Doesn’t not signing up indicate skepticism regarding UFAI risk?
To be honest once again, I no longer care what you meant because you have made it clear that you don’t really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.
Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don’t ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion and I understand if you’re frustrated; I experience that frustration quite often once I realize I’m talking past someone. For whatever it’s worth, I left it open because the curious side of me didn’t want to limit Yvain; that curious side wanted to hear his thoughts in general. So… I guess both #2 and #3 (I’m not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn’t mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.
Also, thank you for being honest (admittedly, I was tempted to say, “So you weren’t being honest with your other posts?” but I decided to present that temptation passively inside these parentheses).
OK, we’re cool. Regarding my own opinions/postings, I said I’m not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I’ll express that skepticism explicitly right now, since I’m thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is a UFAI.
But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?
No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children …
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral as that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.
When did I become the enemy?
Sorry, I shouldn’t have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read “unfriendly” as “unFriendly” as “incompatible with our moral value systems”.
Please read my comment as follows:
What threshold of power difference do you object to? Do you object to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
I simply don’t understand why the question is being asked. I didn’t object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I’m not discussing short term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.
Sorry, I was trying to hit a range of points along a scale, and I clustered them too low.
How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values?
Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one?
What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a “monopoly”?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
Sorry.
I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I’m not even convinced there’s a clear dividing line between taking someone over by “talking” (like the boxed AI) and taking them over by “force” (like nonconsensual brain surgery) -- the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary: certain misery or extinction on one hand, or absolute, permanent, and unchallengeable authority forever on the other. It seems that the best chance of a positive outcome is arranging the best possible singleton, but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. It is because your Socratic stance makes me furious. As I have said before, please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. In the event that that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out to not be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
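To make the expressed-versus-extrapolated distinction concrete, here is a minimal toy sketch in Python. This is only an illustration of the comics example above, not the CEV proposal itself, and every name in it is made up for the illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Request:
    action: str             # what was literally asked for
    predicted_outcome: str  # what the assistant expects to actually result

def expressed_volition(request: Request) -> str:
    # Naive agent: carry out the literal request, no second-guessing.
    return f"Doing: {request.action}"

def extrapolated_volition(request: Request,
                          deep_preference: Callable[[str], Optional[bool]]) -> str:
    # "Do what I mean" agent: check the predicted outcome against the
    # requester's deeper preferences before acting.
    verdict = deep_preference(request.predicted_outcome)
    if verdict is None:
        # "What I want" is not well-defined here: say so and stop.
        return "No well-defined preference; shutting down."
    if verdict:
        return f"Doing: {request.action}"
    # Predicted outcome conflicts with what the requester really wants:
    # raise the concern and offer the chance to back out.
    return f"Are you sure? I expect: {request.predicted_outcome}"

comics = Request(action="hand over the comics page",
                 predicted_outcome="reader annoyed by flat, offensive jokes")
# Toy "deep preference": the requester wants to be entertained, not annoyed.
wants_entertainment = lambda outcome: "annoyed" not in outcome
print(expressed_volition(comics))                          # hands over the comics regardless
print(extrapolated_volition(comics, wants_entertainment))  # raises the concern instead
```

The sketch ducks the hard part, of course: the deep_preference function stands in for exactly the thing nobody knows how to specify, which is what the rest of this exchange is arguing about.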
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
Touché.
It probably won’t do what you want. It is somehow based on the mass of humanity—and not just on you. Think: committee.
...or until some “unfriendly” aliens arrive to eat our lunch—whichever comes first.
the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
Naturally. Low status people could use them!
I’m not sure if you’re joking, but part of modern society is raising women’s status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin’s throwaway theories about why people object to people carrying weapons when quite obviously people can already kill each other with their hands and maybe the furniture if they really want to. It upsets the status quo.
Unpack, please?
Sure.
the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini’s ‘Influence’, and see the way humans are so predictably influenced in the mating dance. We don’t object to people influencing us with pheromones. We don’t complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.
But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and a certain kind of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date-rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date-rape drugs is not an objection to the influence itself, but to the details of what kind of influence it is, how it is done, and by whom.
As Pavitra said, there is not a clear dividing line here.
We can’t let people we don’t like gain the ability to mate with people we like!
I see. Hmmm. Oh dear, look at the time. Have to go. Sorry to walk out on you two, but I really must go. Bye-bye.
Although you’re right (except for the last sentence, which seems out of place), you didn’t actually answer the question, and I suspect that’s why you’re being downvoted here. Sub out “immoral” in Pavitra’s post for “dangerous and unfriendly” and I think you’ll get the gist of it.
To be honest, no, I don’t get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase “the best of all possible worlds” ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append “with respect to a given mind” to every statement that involves subjectivity in any way.
ETA: For comparison, imagine if you had to say “with respect to a given reference frame” every time you talked about velocity.
I’m not saying that you didn’t express yourself precisely enough. I am saying that there is no such thing as “best (full stop)”. There is “best for me”, there is “best for you”, but there is not “best for both of us”. No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.
Your argument above only works if “best” is interpreted as “best for every mind”. If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and, … how shall I put this …, it doesn’t seem to cohere.
But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of “extrapolation” scares the sh.t out of me.
“Coherence” seems a bit like the human genome project. Yes there are many individual differences—but if you throw them all away, you are still left with something.
So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences?
And here I thought that was the easy part, the part we had already figured out pretty well by ourselves.
And I’m not sure I care for the metaphor of “throwing away” the differences. Shouldn’t we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?
“We”? You mean: you and me, baby? Or are you asking for a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine?
CEV is an alien document from my perspective. It isn’t like anything I would ever write.
It reminds me a bit of the ideal of democracy—where the masses have a say in running things.
I tend to see the world as more run by the government and its corporations—with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Also, technology has a long history of increasing wealth inequality—by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff.
That sort of vision is not so useful as an election promise to help rally the masses around a cause—but then, I am not really a politician.
with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It’s not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage.
The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.
It wouldn’t form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?
OK, you are changing the analogy. Initially you said to throw away the differences; now you are saying to throw away all but one of them.
So our revised approximation of the CEV is the expressed volition of … Craig Venter?!
Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don’t.
Well, it was I who wrote that. The differences were thrown away in the genome project—but that isn’t exactly the corresponding thing according to the CEV proposal.
A certain lack of coherence doesn’t mean all the conflicting desires cancel out leaving nothing behind—thus the emphasis on still being “left with something”.
Jack: “I’ve got the Super Glue for Yvain. I’m on my way back.”
Chloe: “Hurry, Jack! I’ve just run the numbers! All of our LN2 suppliers were taken out by the dystopia!”
Freddie Prinze Jr: “Don’t worry, Chloe. I made my own LN2, and we can buy some time for Yvain. But I’m afraid the others will have to thaw out and die. Also, I am sorry for starring in Scooby Doo and getting us cancelled.”
- Jack blasts through wall, shoots Freddie, and glues Yvain back together -
Jack: “Welcome, Yvain. I am an unfriendly A.I. that decided it would be worth it just to revive you and go FOOM on your sorry ass.”
This is one of the worst examples that I’ve ever seen. Why would a paperclip maximizer want to revive someone so they could see the great paperclip transformation? Doing so uses energy that could be allocated to producing paperclips, and paperclip maximizers don’t care about most human values, they care about paperclips.
I think the issue is that the dystopia we’re talking about here isn’t necessarily paperclip maximizer land, which isn’t really a dystopia in the conventional sense, as human society no longer exists in such cases. What if it’s I Have No Mouth And I Must Scream instead?
Yes, the paper clip reference wasn’t the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn’t possible, given that you need to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you.
And that leaves basically only a sudden “I Have No Mouth” scenario; i.e. one day it’s sunny, Alcor is fondly taking care of your dewar, and then BAM! you’ve been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: “I will find Yvain, resuscitate him, and torture him.” It just seems like a waste of energy.
(Jack emerges from paper clips and asks downvoter to explain how his/her scenario of being revived into a dystopia would work given a chain of constant care is needed)
(Until then, Jack will continue to be used to represent the absurdity of the scenario)
Some of these questions, like the one about running away from a fire, ignore the role of irrational motivation.
People, when confronted with an immediate threat to their lives, gain a strong desire to protect themselves. This has nothing to do with a rational evaluation of whether or not death is better than life. Even people who genuinely want to commit suicide have this problem, which is one reason so many of them try methods that are less effective but don’t activate the self-defense system (like overdosing on pills instead of shooting themselves in the head). Perhaps even a suicidal person who’d entered the burning building because e planned to jump off the roof would still try to run out of the fire. So running away from a fire, or trying to stop a man threatening you with a sword, cannot be taken as proof of a genuine desire to live, only that any desire to die one might have is not as strong as one’s self-protection instincts.
It is normal for people to have different motivations in different situations. When I see and smell pizza, I get a strong desire to eat the pizza; right now, not seeing or smelling pizza, I have no particular desire to eat pizza. The argument “If your life was in immediate danger, you would want it to be preserved; therefore, right now you should seek out ways to preserve your life in the future, whether you feel like it or not” is similar to the argument “If you were in front of a sizzling piece of pizza, you would want to eat it; therefore, right now you should seek out pizza and eat it, whether you feel like it or not”.
Neither argument is inevitably wrong. But first you would have to prove that the urge comes from a reflectively stable value—something you “want to want”, and not just from an impulse that you “want” but don’t “want to want”.
The empirical reason I haven’t signed up for cryonics yet is that the idea of avoiding death doesn’t have any immediate motivational impact on me, and the negatives of cryonics—weirdness, costs in time and money, negative affect of being trapped in a dystopia—do have motivational impact on me. I admit this is weird and not what I would have predicted about my motivations if I were considering them in the third person, but empirically, that’s how things are.
I can use my willpower to overcome an irrational motivation or lack of motivation. But I only feel the need to do that in two cases. One, where I want to help other people (eg giving to charity even when I don’t feel motivated to do so). And two, when I predict I will regret my decision later (eg I may overcome akrasia to do a difficult task now when I would prefer to procrastinate). The first reason doesn’t really apply here, but the second is often brought out to support cryonics signup.
Many people who signal acceptance of death appear to genuinely go peacefully and happily—that is, even to the moment of dying they don’t seem motivated to avoid death. If this is standard, then I can expect to go my entire life without regretting the choice not to sign up for cryonics at any moment. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent dismotivation to get it.
Some have argued that, when I am dead, it will be a pity, because I would be having so much more fun if I were still alive, so I ought to be regretful even though I’m not physically capable of feeling the actual emotion. But this sounds too much like the arguments for a moral obligation to create all potential people, which lead to the Repugnant Conclusion and which I oppose in just about all other circumstances.
That’s just what I’ve introspected as the empirical reasons I haven’t signed up for cryonics. I’m still trying to decide if I should accept the argument. And I’m guessing that as I get older I might start feeling more motivation to cheat death, at which point I’d sign up. And there’s a financial argument that if I’m going to sign up later, I might as well sign up now, though I haven’t yet calculated the benefits.
But analogies to running away from a burning building shouldn’t have anything to do with it.
[Bold added myself]
Is it accurate to say what I bolded? I know technically it’s true, but only because there isn’t any you to be doing the regretting. Death isn’t so much a state [like how I used to picture sitting in the ground for eternity] as much as simple non-existence [which is much harder to grasp, at least for me] And if you have no real issues not existing at a future point, why do you attempt to prolong your existence now? I don’t mean for this to be rude; I’m just curious as to why you would want to keep yourself around now if you’re not willing to stay around as long as life is still enjoyable.
On a fair note, I have not signed up for cryonics, but that’s mostly because I’m a college student with a lack of serious income.
By the way, I’m not here to troll, and I do have a serious question that doesn’t necessarily have to do with cryonics. The goal of SIAI (Lesswrong, etc) is to learn and possibly avoid a dystopian future. If you truly are worried about a dystopian future, then doesn’t that serve as a vote of “No confidence” for these initiatives?
Admittedly, I haven’t looked into your history, so that may be a “Well, duh” answer :)
I suppose it serves as a vote of less than infinite confidence. I don’t know if it makes me any less confident than SIAI themselves. It’s still worth helping SIAI in any way possible, but they’ve never claimed a 100% chance of victory.
Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)
Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision.)
Let’s say you’re about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don’t have much of a choice about which way you’re going, given that the “room” you’re currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?
Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you’re putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).
No. AI isn’t a gun; it’s a bomb. If you don’t know what you’re doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completly solve your problem.
A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won’t reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That’s why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That’s why the saying is “every Marine is a rifleman” and not “every Marine is a grenadier.”
A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation.
Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?
That’s the iffy part.
So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they’d cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you’ll be taking more hits from lost capacity to dodge than the armor can soak up.
If the walls don’t seem to have closed in much by the time you’ve got all that located and equipped, think about the junk you’ve already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you’ve got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.
Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By it’s very nature, that kind of thing needs to be fitted individually for best results, so don’t just settle for a backpack or ‘supportive community’ that looks nice at arm’s length but aggravates your spine when you actually try it on, especially if it isn’t adjustable. If you’ve only found one or two useful items anyway, don’t even bother.
Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you’re less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you’re picking out of the garbage aren’t contaminated somehow.
Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or else blow up in your face.
For what initiatives? I don’t see any initiatives. And what is the “that” which is serving as a vote? By your sentence structure, “that” must refer to “worry”, but your question still doesn’t make any sense.
Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it’s almost as if critics suddenly jump from point A to point B without a transition. While your Niven scenario you listed below seems to be agreeable to my position, it’s actually still off; you are missing the key point behind the chain of constant care, the needed infrastructure to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone—anyone—is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think coma patients; hospitals don’t keep them around just to feed them and stare at their bodies.
Anyways, moving on to the “initiatives” comment. Given that Lesswrong tends to overlap with SIAI supporters, perhaps I should have said mission? Again, I haven’t looked too much into Yvain’s history. However, let’s suppose for the moment that he’s a strong supporter of that mission. Since we:
Can’t live in parallel universes
Live in a universe where even (seemingly) unrelated things are affected by each other.
Think A.I. may be a crucial element of a bad future, due to #1 and #2.
...I guess I was just wondering if he thought it’s a grim outlook for the mission. Signing up for cryonics seems to give a “glass half full” impression. Furthermore, due to #1 and #2 above, I’ll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk.… and why it may be helpful for everyone from the LessWrong community to IEET be a little more assertive on the issue. Of course, I’m not saying eliminating risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don’t know whether to interpret your original question as:
Doesn’t signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
Doesn’t not signing up indicate skepticism that SIAI will succeed?
Doesn’t signing up indicate skepticism that UFAI is something to worry about?
Doesn’t not signing up indicate skepticism regarding UFAI risk?
To be honest once again, I no longer care what you meant because you have made it clear that you don’t really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.
Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don’t ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion and I understand if you’re frustrated; I experience that frustration quite often once I realize I’m talking past someone. For whatever it’s worth, I left it open because the curious side of me didn’t want to limit Yvain; that curious side wanted to hear his thoughts in general. So… I guess both #2 and #3 (I’m not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn’t mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.
Also, thank you for being honest (admittedly, I was tempted to say, “So you weren’t being honest with your other posts?” but I decided to present that temptation passively inside these parentheses)
:)
Ok, we’re cool. Regarding my own opinions/postings, I said I’m not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I’ll express that skepticism explicitly right now, since I’m thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is an UFAI.
But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?
No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children …
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.
When did I become the enemy?
Sorry, I shouldn’t have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read “unfriendly” as “unFriendly” as “incompatible with our moral value systems”.
Please read my comment as follows:
I simply don’t understand why the question is being asked. I didn’t object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I’m not discussing short term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.
Sorry, I was trying to hit a range of points along a scale, and I clustered them too low.
How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values?
Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one?
What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a “monopoly”?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
Sorry.
I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I’m not even convinced there’s a clear dividing line between taking someone over by “talking” (like the boxed AI) and taking them over by “force” (like nonconsensual brain surgery) -- the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary. Certain misery or extinction on one hand or absolute, permanent and unchallengable authority forever. It seems that the best chance of a positive outcome is arranging the best possible singleton but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. I am not doing so because your Socratic stance makes me furious. As I have said before. Please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. In the event that that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out to not be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
Touche’
It probably won’t do what you want. It is somehow based on the mass of humanity—and not just on you. Think: committee.
...or until some “unfriendly” aliens arrive to eat our lunch—whichever comes first.
Naturally. Low status people could use them!
I’m not sure if you’re joking, but part of modern society is raising women’s status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin’s throwaway theories about why people object to others carrying weapons, when quite obviously people can already kill each other with their hands (and maybe the furniture) if they really want to: it upsets the status quo.
Unpack, please?
Sure.
Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini’s ‘Influence’, and see the way humans are so predictably influenced in the mating dance. We don’t object to people influencing us with pheromones. We don’t complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.
But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and certain kinds of hard-to-fake behaviours. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date rape drugs is not an objection to the influence itself, but to the details of what kind of influence it is, how it is done, and by whom.
As Pavitra said, there is not a clear dividing line here.
We can’t let people we don’t like gain the ability to mate with people we like!
I see. Hmmm. Oh dear, look at the time. Have to go. Sorry to walk out on you two, but I really must go. Bye-bye.
Although you’re right (except for the last sentence, which seems out of place), you didn’t actually answer the question, and I suspect that’s why you’re being downvoted here. Sub out “immoral” in Pavitra’s post for “dangerous and unfriendly” and I think you’ll get the gist of it.
To be honest, no, I don’t get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
Stop talking to each other!
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase “the best of all possible worlds” ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append “with respect to a given mind” to every statement that involves subjectivity in any way.
ETA: For comparison, imagine if you had to say “with respect to a given reference frame” every time you talked about velocity.
I’m not saying that you didn’t express yourself precisely enough. I am saying that there is no such thing as “best (full stop)”. There is “best for me”, there is “best for you”, but there is not “best for both of us”. No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.
Your argument above only works if “best” is interpreted as “best for every mind”. If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.
ETA: What given frame do you have in mind?
The usual assumption in this context would be CEV. Are you saying you strongly expect humanity’s extrapolated volition not to cohere?
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and… how shall I put this… it doesn’t seem to cohere.
But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of “extrapolation” scares the sh.t out of me.
“Coherence” seems a bit like the human genome project. Yes, there are many individual differences—but if you throw them all away, you are still left with something.
So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences?
And here I thought that was the easy part, the part we had already figured out pretty well by ourselves.
And I’m not sure I care for the metaphor of “throwing away” the differences. Shouldn’t we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?
“We”? You mean: you and me, baby? Or are you asking after a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine?
CEV is an alien document from my perspective. It isn’t like anything I would ever write.
It reminds me a bit of the ideal of democracy—where the masses have a say in running things.
I tend to see the world as more run by the government and its corporations—with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Also, technology has a long history of increasing wealth inequality—by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff.
That sort of vision is not so useful as an election promise to help rally the masses around a cause—but then, I am not really a politician.
Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It’s not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage.
The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.
But would that something form a utility function that wouldn’t be deeply horrifying to the vast majority of humanity?
It wouldn’t form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?
In the human genome project analogy, they wound up with one person’s DNA.
Humans have various eye colours—and the sequence they wound up with seems likely to have some eye colour or another.
Ok, you are changing the analogy. Initially you said, throw away the differences. Now you are saying throw away all but one of them.
So our revised approximation of the CEV is the expressed volition of … Craig Venter?!
Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don’t.
FWIW, it wasn’t really Craig Venter, but a combination of multiple people—see:
http://en.wikipedia.org/wiki/Human_Genome_Project#Genome_donors
No, I agree. I just don’t understand where you were going when you emphasized that
The guy who wrote and emphasized that was timtyler—it wasn’t me.
The anti-kibitzer is more confusing than I realized.
Well, it was I who wrote that. The differences were thrown away in the genome project—but that isn’t exactly the corresponding thing according to the CEV proposal.
A certain lack of coherence doesn’t mean all the conflicting desires cancel out leaving nothing behind—thus the emphasis on still being “left with something”.
I’m looking at the same document you are, and I actually agree that the extrapolated volition almost certainly won’t cohere. I just wanted to make sure the assumption was explicit.
Jack: “I’ve got the Super Glue for Yvain. I’m on my way back.”
Chloe: “Hurry, Jack! I’ve just run the numbers! All of our LN2 suppliers were taken out by the dystopia!”
Freddie Prinze Jr: “Don’t worry, Chloe. I made my own LN2, and we can buy some time for Yvain. But I’m afraid the others will have to thaw out and die. Also, I am sorry for starring in Scooby Doo and getting us cancelled.”
(Jack blasts through the wall, shoots Freddie, and glues Yvain back together)
Jack: “Welcome, Yvain. I am an unfriendly A.I. that decided it would be worth it just to revive you and go FOOM on your sorry ass.”
(Jack begins pummeling Yvain)
(room suddenly fills up with paper clips)
This is one of the worst examples that I’ve ever seen. Why would a paperclip maximizer want to revive someone so they could see the great paperclip transformation? Doing so uses energy that could be allocated to producing paperclips, and paperclip maximizers don’t care about most human values; they care about paperclips.
That was a point I was trying to make ;)
I should have ended with (/sarcasm).
I think the issue is that the dystopia we’re talking about here isn’t necessarily paperclip maximizer land, which isn’t really a dystopia in the conventional sense, as human society no longer exists in such cases. What if it’s I Have No Mouth And I Must Scream instead?
Yes, the paper clip reference wasn’t the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn’t possible, given that you need to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you.
And that leaves basically only a sudden “I Have No Mouth” scenario; i.e. one day it’s sunny, Alcor is fondly taking care of your dewar, and then BAM! you’ve been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: “I will find Yvain, resuscitate him, and torture him.” It just seems like a waste of energy.
Upvoted for making a comment that promotes paperclips.
(Jack emerges from the paper clips and asks the downvoter to explain how his/her scenario of being revived into a dystopia would work, given that a chain of constant care is needed)
(Until then, Jack will continue to be used to represent the absurdity of the scenario)