(Comment cosmetically edited in response to Kaj_Sotala, and again to replace a chunk of text that fell in a hole somewhere)
OK, I’ll have a go (will be incomplete).
People in general will find the Optimalverse unpleasant for a lot of reasons I’ll ignore: major changes to the status quo, perceived incompatibility with non-reductionist worldviews, the belief that a utopia is necessarily unpleasant or Omelas-like (a variant of this fallacy?), and lots of even messier things.
People on LessWrong may be thinking about portions of the Fun Theory Sequence that the Optimalverse conflicts with, and in some cases they may think that these conflicts destroy all of the value of the future, hence horror.
(rot13 some bits that might constitute spoilers)
Humans want things to go well, but they also want things to have been able to go badly, such that they made the difference. Relevant: Living By Your Own Strength, Free to Optimize.
The existence of a superintelligence makes human involvement superfluous, and humans do not want this to happen. Relevant: Amputation of Destiny.
Gur snpg gung gur NV vf pbafgenvarq gb fngvfsl uhzna inyhrf gur cbal jnl zrnaf gung n uhtr nzbhag bs cbffvoyr uhzna rkcrevrapr vf abj vzcbffvoyr gb rire ernyvfr. Eryrinag: Hzz… znlor Value is Fragile? Abg dhvgr. Uryc zr bhg urer, thlf! (nyfb, vafreg lbhe bja cersreerq snaqbz wbxr nobhg cbbe Ylen arire trggvat gb unir unaqf rgp.)
Nf lbh zragvbarq, gur jnl va juvpu gur NV’f cnegvphyne qrsvavgvba bs “uhzna” jnf abg evtug naq pna arire or zbqvsvrq, urapr nyvra ncbpnylcfrf. Eryrinag: The Hidden Complexity of Wishes.
Themes that are more explicit after the extra worldbuilding in Caelum Est Conterrens:
Zbqvslvat uhzna zvaqf va gur jnl gur hcybnqf ner qrfpevorq nf orvat zbqvsvrq vf ernyyl, ernyyl, ernyyl, ernyyl uneq, naq zvtug or vzcbffvoyr jvgubhg oernxvat crefbany pbagvahvgl. Eryrinag: Growing Up is Hard. (Guvf vf zber bs n ubeebe fbhepr guna na nethzrag, orpnhfr gur fgbel pna or ernq nf fgvchyngvat gung gur NV vf trggvat vg evtug.)
Gjb cbffvoyr svany nggenpgbef sbe uhzna tebjgu ner cerfragrq (Ybbc naq Enl Vzzbegnyf):
ybbcvat raqyrffyl jvgu zrzbel biresybj (gung vf, va gur raq nyy yvirf snvy gur pbaqvgvbaf va Emotional Involvement ol orpbzvat n qvfpbaarpgrq frevrf bs rcvfbqrf)
qrcnegvat sebz gur uhznar inyhr senzrjbex (“bhgtebjvat ybir”)
Fbzr urer ner abg fngvfsvrq jvgu rvgure naq ernyyl, ernyyl ubcr gurer vf n guveq jnl sbe uhznaf gb npuvrir haobhaqrq tebjgu gung erznvaf zrnavatshy (ol gurve yvtugf).
Notes:
I’m sympathetic to your position; the substance of my comment here is that I think I understand what’s supposed to horrify me.
That comment of mine is no doubt wrong; there will be things that don’t horrify me that I didn’t even realise were supposed to.
There are quick and obvious comebacks to nearly all the above points. In a lot of cases, those quick comebacks are dealt with in the linked articles. Read the Fun Theory Sequence; it’s my favorite sequence, despite the fact that I disagree with more of it than any of the others.
Upvoted, but I’d like to request that you ROT13 either everything or nothing past a certain point. Being unable to just select all of it to be deciphered, and instead having to pick out a few pieces at a time, was mildly annoying.
Done, thanks for saying so. I was trying to avoid thinking about the interaction between rot13 and links (leaving the anchor text un-rot13ed seems like acceptable practice?), but I should just have spent the extra two minutes.
Thanks! Much better now. :-) (As for the links, one can just paint over them as well and think “oh it was just some link” when they show up as garbled in the translation.)
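(Tangential, but for anyone who’d rather decode the rot13 bits programmatically than paint over them: a minimal sketch in Python using the standard library’s rot_13 codec; the sample string here is made up, not a quote from the story.)

```python
import codecs

# Hypothetical sample text, not a quote from the thread.
garbled = "Uryyb, Rdhrfgevn!"

# rot13 is its own inverse, so the same call both encodes and decodes.
print(codecs.decode(garbled, "rot13"))  # -> Hello, Equestria!
```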
Now that I’ve thought about your post, I realize that the biggest question in this story is what the phrase “satisfy values” actually means. Currently it’s a pretty big hand-wave in the story. Your first point especially seems to imply that we understood it a bit differently.
In my understanding, if I value real challenge, the possibility of things going badly, or even some level of pain, then the Optimalverse will somehow maximize those values and at least provide the feeling of real challenge and the possibility of things going badly. And I don’t know why the Optimalverse couldn’t even provide the real thing. The way Light Sparks tries to pass the Intermediate Magic test seems an awful lot like real challenge. Of course the Optimalverse wouldn’t allow you to die, because in most cases the dislike of death overrides the longing for real challenge in the value system, but that still leaves a lot of options free. I got the impression that this is how it’s actually handled in the story. There’s this passage:
Cbavrf unq ab cerqngbef; orvat ‘rngra’ ol n zbafgre va gur Rireserr sberfg whfg raqrq jvgu gur cbal va gur ubfcvgny va dhvgr n ovg bs cnva. Fngvfslvat inyhrf jnfa’g whfg nobhg unccvarff; univat zbafgref yrg cbavrf grfg gurve fgeratgu be oenirel. Rneyl ba, evtug nsgre gur pbairefvba bs Rnegu, n zrer sbhe uhaqerq cbavrf unq crgvgvbarq Cevaprff Pryrfgvn gb yrg gurz qvr, naq Cevaprff Pryrfgvn unq bayl nterrq gung qbvat fb jbhyq fngvfsl gurve inyhrf va rvtugl-fvk pnfrf. Abcbal unq qvrq va frireny Rdhrfgevna fhowrpgvir zvyyraavn.
Your second point is of course a real concern for some people, but personally it doesn’t feel very relevant. My actions don’t currently feel very important in the big scheme of things, and I don’t know how a superintelligence would change that all that much. If I’m not personally doing anything important, then it doesn’t really matter to me whether the important things are done by other humans or by a superintelligence. Anyway, this will always be a problem with AGI, and if the AGI is friendly then the benefits outweigh the negatives, IMO. I think the alternative is worse.
The way I understood it is that the “ponies” in this story are essentially humans in pony disguise, with four legs (two of which can almost work like hands). A paragraph from the story:
V zbqvsvrq lbhe zbgbe pbegrk fb lbh pbhyq qrny jvgu lbhe arjsbhaq dhnqehcrqny zbirzrag, nybat jvgu bgure qvssreraprf orgjrra n uhzna naq cbal obql. V unir znqr gur zvavzny frg bs cbffvoyr punatrf; lbhe crefbanyvgl vf hapunatrq.
A big part of being human comes from our minds and hormones; walking on two legs or being able to use hands extensively are more trivial points. If a person’s psychology doesn’t change in the transition from human to pony, then this eliminates most of the problems in your third point.
I haven’t read Caelum Est Conterrens and can’t fully comment on those points, but they seem more like technicalities. I don’t know if it’s actually possible to turn a person into a pony without losing the person in the process. But if the brain parameters aren’t changed and the psychology doesn’t change in the process, as seems to be the case in this story, then I would be inclined to say it’s possible. Clearly it can’t be worse for your identity than losing all your limbs or becoming a quadriplegic? Anyway, one of the axioms of this story seems to be that it’s possible.
I actually read the Fun Theory Sequence in its entirety before I read ‘Friendship is Optimal’, and I thought FiO more faithful to the spirit of the sequence than 99% of utopian stories out there. This is mostly because Celestia maximizes people’s values, not their happiness. That’s a very vague concept, and a lot depends on how it’s implemented, but if it’s implemented the way I picture it, there shouldn’t be problems with the things mentioned in High Challenge, Complex Novelty, Sensual Experience, Living By Your Own Strength, Free to Optimize, In Praise of Boredom, Interpersonal Entanglement, and so on.
Of course, I have problems with applying things I read about to all my experiences, so it could be I misremember some things in the sequence or didn’t understand them correctly to begin with.
Clearly it can’t be worse for your identity than losing all your limbs or becoming a quadriplegic?
Well, this is not clear, though it might be true.
I have frequently had the experience of not doing anything with my left leg; losing the ability to ever do anything with my left leg means I’m prevented from ever doing anything with it. This is horrible, of course, but it’s the horror of being prevented from doing things I often choose not to do. Losing all my limbs is a more extreme version of the same thing.
Having different limbs might be more identity-distorting, by virtue of providing experiences that are completely unfamiliar.
Then again it might not.
For my own part, I’m not all that attached to preserving my current identity, so I’m not sure the question matters to me. If my choice is between an identity-altering pony body, and an identity-preserving quadriplegic body, I might well choose the former.
Endorsed as a good summary.
I read Caelum Est Conterrens; now I can better understand why some aspects of the scenario are a bit disconcerting, if not horrifying. I find all the options (loop immortality, ray immortality, and exponential immortality) kinda unpleasant, but maybe that is as good as it gets. Still, it feels like many of those things are not exclusive to this scenario, but are part of the world anyway.
Related to this, what did you think of the “normal” ending in Three Worlds Collide?
From flaky memory, I think I find the Normal Ending far less acceptable than anything in the Optimalverse: one feels the premature truncation of human nature, rather than the natural exhaustion of it (or the choice to become inexhaustible). But hey, maybe I’m inconsistent.