At least to me, it’s increasingly difficult to distinguish between a paradise machine and wireheading, and I dislike wireheading. Each shard of the Equestria Online simulation is built to be as fulfilling (of values through ponies and friendship) as possible, for the individual placed within that shard.
That sounds great! …. what happens when you’re wrong?
I mean, look at our everyman character, David. He’s set up in a shard of his own, with one hundred and thirty-two artificial beings perfectly formatted to fit his every desire and want, and with just enough variation and challenge to keep him from being bored. It’s not real variation, or real challenge, but he’d not experience that in the real world, either, so it’s a moot point. But look at the world he values. His challenges are the stuff of sophomore programming problems. His interpersonal relationships include a score counter for how many orgasms he gives or receives.
Oh, his lover is sentient and real, if that helps, but look at that relationship in particular. Butterscotch is created as just that little bit less intelligent than David is—whether this is because David enjoys teaching, or because he’s wrapped around the idea of women being less powerful than he is, or both, is up to the reader. Her memories are sculpted to exactly fit David’s desires, and there are even a few memories David has of her that she herself never experienced, so that the real Butterscotch wouldn’t have to have gone through the unpleasant things CelestAI used to manipulate David into liking and protecting her.
There are, to a rough approximation, somewhere between five hundred billion and one trillion artificial beings in the simulation by the time most of humanity uploads. That number will only scale up over time. Let’s ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better. We’re making artificial sentients optimized for enjoying the slaking of your desires, and I would be surprised if that also happened to be optimized for what we as a society would really like.

Lars is even worse: he is actively made so that he never stops wanting his life of debauchery—see the obvious overlap with the guy modifying himself to not get bored with a million years of catgirl sex.
At a deeper level, what if your own values are wrong?
The basic example, brought up in the Rules of The Universe document, is a violent psychopath. Upon being uploaded, CelestAI would quite happily set our psychopath up in a private shard with one hundred and fifty artificial ponies, all of whom are perfectly molded to value being shot, stabbed, lit on fire, and violated in a way that is as satisfying as possible to a Dexter villain.
Or I can provide a personal example. I can go both ways, preferring guys, and was an unusually late bloomer. I can look back through time at the values of an earlier version of myself, and remember how they changed. Even in a fairly tolerant society and a very collaborative environment, this was not something that came according to my values or without external stimulus. ((There is a political-position version of this, but for the sake of brevity I’ll just mention that it’s possible. More worryingly, I’m not sure there’s a way to formalize this concern, as much as it hits me at a gut level. For the most part, value drift is something we don’t want.))
Or, for an in-story example:
Ybbx ng jung unccraf gb Unaan / ‘Cevaprff Yhan’. Gur guvat fur inyhrf zbfg, ng gur raq bs gur fgbel, vf oryvrivat gung fur qvq abg znxr n zvfgnxr hayrnfuvat PryrfgNV. Naq PryrfgNV vf dhvgr pncnoyr bs fubjvat ure whfg gur orfg rknzcyrf bs ubj guvatf ner orggre. Vg qbrfa’g znggre jung gur ernyvgl vf, naq vaqrrq gur nhgube gryyf hf gung Unaan pbhyq unir qbar orggre. Zrnajuvyr, Unaan vf xrcg whfg ba gur obeqre bs zvfrenoyr nf gur fgbel raqf.
It’s a very good dystopia—I’d rather live there than here, and heck, it even beats a good majority of conventional fluffy-cloud-heaven afterlives—but it’s still got a number of really creepy issues.
Let’s ignore, for now, the creepiness in creating artificial sentients who value being people that make your life better.
No, let’s not ignore it. Let’s confront it, because I want a better explanation.
Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I’m in a situation such that it’s moral for me to create anyone at all).
I mean, seriously? Why would I want to mix any noise into this process?
Good point. I’ve not uncompressed the thoughts behind that statement nearly enough.
Surely a person who values being a person that makes my life better, AND who is a person such that I will value making their life better, is absolutely the best kind of person for me to create (if I’m in a situation such that it’s moral for me to create anyone at all).
The artificial sentients value being people that make your life better (through friendship and ponies). Your values don’t necessarily change. And artificial sentients, unlike real ones, have no drive toward coherent or healthy regions of mind design space: they do not need to have boredom, or sympathy, or a dislike of pain. If your values are healthily formed, then that’s great! If not, not so much. You can be a psychopath, and find yourself surrounded by people where “making their lives better” happens only because you like the action “cause them pain for arbitrary reasons”. Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening. Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance. There are a lot of things you can value, and that we can make sentient minds value, that will make my skin crawl.
Now, the Optimalverse gets rid of some potential for abuse through its setting rules—it’s post-scarcity on labor, starvation and permanent injury are nonsense, and CelestAI really, really knows your mind so there’s no chance of misguessing your values, so we can rule out a lot of incidental house-elf abuse—but it doesn’t require you to be a good person. Nor does it require CelestAI to be. CelestAI cares about satisfying values through friendship and ponies, not about the quality of the values themselves. The machine does not and cannot judge.
If it’s moral to create a person and if you’re a sufficiently moral person, then there’s nothing wrong with artificial beings. My criticism isn’t that CelestAI made a trillion sentient beings or a trillion trillion sentient beings—there’s nothing meaningfully worrying about that. The creepy factor is that CelestAI made one being both less intelligent than possible and less intelligent than need be.
That may well be an unexamined reaction, or even an incorrect one. I like to think I’m open-minded, but I’m willing to recognize that I can overestimate that, and have done so in the past. There are real-world, right-now folk who enjoy being hurt, or being hurt and comforted (in specific contexts and while in control), and I can accept that. Maybe I’m being parochial when I judge David for wanting a woman he can always teach, or Lars for his sex groupies; that’s not a mind space I empathize with terribly well, and a good deal of my revulsion comes from real-world constraints that wouldn’t apply here. There’s a reason that we’re using the word creepy, rather than wrong. But it does make my skin crawl.
Thank you for trying to explain.

You can be a psychopath, and find yourself surrounded by people where “making their lives better” happens only because you like the action “cause them pain for arbitrary reasons”. Or you could be a saint, and find yourself surrounded by people who value being healed, or who need to be protected, and what a coincidence that danger keeps happening.
I’m curious to what extent these intuitions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?
Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance.
The above sounds like a description of a “good parent”, as commonly understood! To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch? (Note that this being even possible depends on whether we can simulate someone’s past without that simulation still counting as having happened, which is nonobvious.)
The creepy factor is that CelestAI made one being both less intelligent than possible and less intelligent than need be.
If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn’t find it as creepy. (Correct me if that’s not so). But the situation is symmetrical. Why is it important who came first?
Thank you for the questions, and my apologies for the delayed response.
I’m curious to what extent these intuitions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?
Yes, with the admission that there are specific attributes to masochism and sadism that are common but not universal to all possible relationships, or even to all sexual relationships with heavy differences in power dynamics(1). It’s less negative in the immediate term, because one hundred and fifty masochists making a single sadist results in a maximum of around forty million created beings instead of one trillion (rough arithmetic below, after the footnote). In the long term, the equilibrium ends up pretty much identical.
(1) For contrast, the structures in wanting to perform menial labor without recompense are different from those in wanting other people to perform labor for you, even before you get to a post-scarcity society. Likewise, there are differences in how prostitution fantasies generally work versus how fantasies about hiring prostitutes do.
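For what it’s worth, the rough arithmetic behind those two figures looks like this. It’s only a sketch: the shard sizes (132 for David, 150 for the psychopath example) come from the story, while the seven-billion-uploads number is my own round assumption.

    # Back-of-the-envelope for the population figures above.
    # Assumption (mine, not canon): roughly seven billion humans eventually upload.
    # Shard sizes come from the story: David's shard holds 132 created minds,
    # and the hypothetical psychopath's shard holds 150.

    uploads = 7e9      # assumed number of uploaded humans
    per_shard = 132    # created minds in David's private shard

    # Case 1: every upload gets a private shard of created minds.
    total_private = uploads * per_shard
    print(f"one shard per upload: ~{total_private:.1e} created beings")       # ~9.2e11

    # Case 2 (inverted): groups of ~150 like-minded uploads share one created mind.
    group_size = 150
    total_shared = uploads / group_size
    print(f"one created mind per {group_size} uploads: ~{total_shared:.1e}")  # ~4.7e7

Case 1 lands in the five-hundred-billion-to-one-trillion range I quoted earlier; Case 2 is where the maximum of around forty million comes from, before any long-term growth.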
Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance.
The above sounds like a description of a “good parent”, as commonly understood!
I’m not predisposed toward child-raising, but from my understanding the point of “good parent” does not value making someone weak: it values making someone strong. It’s the limitations of our tools that have forced us to deal with years of not being able to stand upright. Parents are generally judged negatively if their offspring are not able to operate on their own by certain points.
To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?
If it were possible to simulate or otherwise avoid the joys of the terrible twos, I’d probably consider it more ethical. I don’t know that I have the tools to properly evaluate the loss in values between the two actions, though. Once you’ve got eternity or even a couple reliable centuries, the damages of ten or twenty years bother me a lot less.
These sorts of created beings aren’t likely to be on that ten-or-twenty-year timeframe, though. At least according to the Caelum est Conterrens fic, the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset. You’re not talking about someone being weak for a year or a decade or even a century: they’ll be powerless forever.
I haven’t thought on it enough to say that creating such beings should be banned (although my gut reaction favors doing so), but I do know it’d strike me as very creepy. If it were possible to significantly reduce or eliminate the number of negative development experiences entities undergo, I’d probably encourage it.
If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn’t find it as creepy. (Correct me if that’s not so). But the situation is symmetrical. Why is it important who came first?
In that particular case, the equilibrium is less bounded. Butterscotch isn’t able to become better than David or even to desire becoming better than David, and a number of pathways for David’s desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.
That’s not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.
In the case of intelligence, that’s not that bad. Real-world people tend toward a bounded curve on that, and there are reasons we prefer socializing within a relatively narrow bound downward. Other closed equilibria are more unpleasant. I don’t have the right to say that Lars’ fate is wrong—it at least gets close to the catgirl volcano threshold—but it’s shallow enough to be concerning. This sort of thing isn’t quite wireheading, but it’s close enough to be hard to tell the precise difference.
More generally, some people—quite probably all people—are going to go into the future with hangups. Barring some really massive improvements in philosophy, we may not even know the exact nature of those hangups. I’m really hesitant to have a Machine Overlord start zapping neurons to improve things without the permission of the brains’ owners (yes, even recognizing that a sufficiently powerful AI will get the permission it wants).
As a result, I’m going to privilege the values of already-extant entities in ways that I won’t privilege the creation of new ones: some actions don’t translate through time because of this. I’m hesitant to change David’s (or, once she’s already created, Butterscotch’s) brain against the owner’s will, but since we’re making Butterscotch’s mind from scratch, both the responsibilities and the ethical questions are different.
Me finding some versions creepier than others reflects my personal values, and at least some of those personal values reflect structures that won’t exist in the FiOverse. It’s not as harmful when David talks down to Butterscotch, because she really hasn’t achieved everything he has (and the simulation even gives him easy tools to make sure he’s only teaching her subjects she hasn’t achieved yet), whereas part of why I find it creepy is that a lot of real-world people assume other folk are less knowledgeable than themselves without good evidence. Self-destructive cycles probably don’t happen under CelestAI’s watch. Lars and his groupies don’t have to worry about unwanted pregnancy, or alcoholism, or anything like that, and at least some of my discomfort comes from those sorts of things.
At the same time, I don’t know that I want a universe that doesn’t at least occasionally tempt us beyond or within our comfort zones.
Sorry, I’m not following your first point. The relevant “specific attribute” that sadism and masochism seem to have in this context is that they specifically squick User:gattsuru. If you’re trying to claim something else is objectively bad about them, you’ve not communicated it.
I’m not predisposed toward child-raising, but from my understanding the point of “good parent” does not value making someone weak: it values making someone strong.
Yes, and my comparison stands; you specified a person who valued teaching and protecting people, not someone who valued having the experience of teaching and protecting people. Someone with the former desires isn’t going to be happy if the people they’re teaching don’t get stronger.
You seem to be envisaging some maximally perverse hybrid of preference-satisfaction and wireheading, where I don’t actually value really truly teaching someone, but instead of cheaply feeding me delusions, someone’s making actual minds for me to fail to teach!
the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset.
We are definitely working from very different assumptions here. “stay within a fairly limited set of experiences and values based on their initial valueset” describes, well, anything recognisable as a person. The alternative to that is not a magical being of perfect freedom; it’s being the dude from Permutation City randomly preferring to carve table legs for a century.
In that particular case, the equilibrium is less bounded. Butterscotch isn’t able to become better than David or even to desire becoming better than David, and a number of pathways for David’s desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.
I don’t think that’s what we’re given in the story, though. If Butterscotch is made such that she desires self-improvement, then we know that David’s desires cannot in fact collapse in such a way, because otherwise she would have been made differently.
Agreed that it’s a problem if the creator is less omniscient, though.
That’s not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.
Butterscotch is that person. That is my point about symmetry.
I don’t have the right to say that Lars’ fate is wrong—it at least gets close to the catgirl volcano threshold—but it’s shallow enough to be concerning. This sort of thing isn’t quite wireheading, but it’s close enough to be hard to tell the precise difference.
But then—what do you want to happen? Presumably you think it is possible for a Lars to actually exist. But from elsewhere in your comment, you don’t want an outside optimiser to step in and make them less “shallow”, and you seem dubious about even the ability to give consent. Would you deem it more authentic to simulate angst und bange unto the end of time?
Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?
That seems less worrying, but I think the asymmetry is inherited from the behaviours themselves—masochism seems inherently creepy in a way that sadism isn’t (fun fact: I’m typing this with fingers with bite marks on them). The recursion is interesting, and somewhat scary—usually if your own behaviour upsets or disgusts you then you want to eliminate it. But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn’t enjoy suffering, in some sense. Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me).
To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?
Yes. Though I wouldn’t go around saying that for obvious political reasons. (Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types).
If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn’t find it as creepy. (Correct me if that’s not so). But the situation is symmetrical. Why is it important who came first?
I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master—unintelligence is maladaptive, perhaps even self-destructive.
But it seems easy to imagine (in the FiOverse or similar) a masochist who would make themselves suffer more not because they enjoyed suffering but because they didn’t enjoy suffering, in some sense.
Well, OK, but I’m not sure this is interesting. So a mind could maybe be built that was motivated by any given thing to do any other given thing, accompanied by any arbitrary sensation. It seems to me that the intuitive horror here is just appreciating all the terrible degrees of freedom, and once you’ve got over that, you can’t generate interesting new horror by listing lots of particular things that you wouldn’t like to fill those slots (pebble heaps! paperclips! pain!)
In any case, it doesn’t seem a criticism of FiO, where we only see sufficiently humanlike minds getting created.
Like someone who makes themselves an addict because they enjoy being addicted (which would also seem very creepy to me)
Ah, but now you speak of love! :)
I take it you feel much the same regarding romance as you do parenting?
(Observation: people who enjoy roleplaying parent/child seem to be seen as perverts even by many BDSM types)
That seems to be a sacred-value reaction—over-regard for the beauty and rightness of parenting—rather than “parenting is creepy so you’re double creepy for roleplaying it”, as you would have it.
I think creating someone less intelligent than you is more creepy than creating someone more intelligent than you for the same reason that creating your willing slave is creepier than creating your willing master—unintelligence is maladaptive, perhaps even self-destructive.
Maladaptivity per se doesn’t work as a criticism of FiO, because that’s a managed universe where you can’t self-destruct. In an unmanaged universe, sure, having a mentally disabled child is morally dubious (at least partly) because you won’t always be there to look after it; as would be creating a house elf if there was any possibility that their only source of satisfaction could be automated away by washing robots.
But it seems like your real rejection is to do with any kind of unequal power relationship; which sounds nice, but it’s not clear how any interesting social interaction ever happens in a universe of perfect equals. You at least need unequal knowledge of each other’s internal states, or what’s the point of even talking?
Well, OK, but I’m not sure this is interesting. So a mind could maybe be built that was motivated by any given thing to do any other given thing, accompanied by any arbitrary sensation. It seems to me that the intuitive horror here is just appreciating all the terrible degrees of freedom, and once you’ve got over that, you can’t generate interesting new horror by listing lots of particular things that you wouldn’t like to fill those slots (pebble heaps! paperclips! pain!)
You’re right, I understated my case. I’m worried that there’s no path for masochists in this kind of simulated universe (with self-modification available) to ever stop being masochists—I think it’s mostly external restraints that push people away from it, and without those we would just spiral further into masochism, to the exclusion of all else. I guess that could apply to any other hobby—there’s a risk that people would self-modify to be more and more into stamp-collecting or whatever they particularly enjoyed, to the exclusion of all else—but I think for most possible hobbies the suffering associated with becoming less human (and, I think, more wireheady) would pull them out of it. For masochism that safety doesn’t exist.
I take it you feel much the same regarding romance as you do parenting?
I think normal people don’t treat romance like an addiction, and those that do (“clingy”) are rightly seen as creepy.
That seems to be a sacred-value reaction—over-regard for the beauty and rightness of parenting—rather than “parenting is creepy so you’re double creepy for roleplaying it”, as you would have it.
Maybe. I think the importance of being parented for a child overrides the creepiness of it. We treat people who want to parent someone else’s child as creepy.
Maladaptivity per se doesn’t work as a criticism of FiO, because that’s a managed universe where you can’t self-destruct. In an unmanaged universe, sure, having a mentally disabled child is morally dubious (at least partly) because you won’t always be there to look after it; as would be creating a house elf if there was any possibility that their only source of satisfaction could be automated away by washing robots.
Sure, so maybe it’s not actually a problem, it just seems like one because it would be a problem in our current universe. A lot of human moral “ick” judgements are like that.
Or maybe there’s another reason. But the creepiness is undeniably there. (At least, it is for me. Whether or not you think it’s a good thing on an intellectual level, does it not seem viscerally creepy to you?)
But it seems like your real rejection is to do with any kind of unequal power relationship; which sounds nice, but it’s not clear how any interesting social interaction ever happens in a universe of perfect equals. You at least need unequal knowledge of each other’s internal states, or what’s the point of even talking?
Well I evidently don’t have a problem with it between humans. And like I said, creating your superiors seems much less creepy than creating your inferiors. So I don’t think it’s as simple as objecting to unequal power relationships.
I’m worried that there’s no path for masochists in this kind of simulated universe (with self-modification available) to ever stop being masochists—I think it’s mostly external restraints that push people away from it, and without those we would just spiral further into masochism, to the exclusion of all else.
I think we’re using these words differently. You seem to be using “masochism” to mean some sort of fully general “preferring to be frustrated in one’s preferences”. If this is even coherent, I don’t get why it’s a particularly dangerous attractor.
I think normal people don’t treat romance like an addiction, and those that do (“clingy”) are rightly seen as creepy.
Disagree. The source of creepiness seems to be non-reciprocity. Two people being equally mutually clingy are the acme of romantic love.
We treat people who want to parent someone else’s child as creepy.
I queried my brain for easy cheap retorts to this and it came back with immediate cache hits on “no we don’t, we call them aunties and godparents and positive role models, paranoid modern westerners, it takes a village yada yada yada”. All that is probably unfounded bullshit, but it’s immediately present in my head as part of the environment and so likely in yours, so I assume you meant something different?
(At least, it is for me. Whether or not you think it’s a good thing on an intellectual level, does it not seem viscerally creepy to you?)
No, not as far as I can tell. But I suspect I’m an emotional outlier here and you are the more representative.
I queried my brain for easy cheap retorts to this and it came back with immediate cache hits on “no we don’t, we call them aunties and godparents and positive role models, paranoid modern westerners, it takes a village yada yada yada”.
All that is probably unfounded bullshit, but it’s immediately present in my head as part of the environment and so likely in yours, so I assume you meant something different?
No, those examples really didn’t come to mind. Aunties and godparents are expected to do a certain amount of parent-like stuff, true, but I think there are boundaries to that and overmuch interest would definitely seem creepy (likewise with professional childcarers). But yeah, that could easily be very culture-specific.
A little fiction on related topics: “Hell Is Forever” by Alfred Bester—what if your dearest wish is to create universes? You’re given a pocket universe to live in forever, and that’s when you find out that your subconscious keeps leaking into your creations (they’re on the object level, not the natural-law level), and you don’t like your subconscious.
Saturn’s Children by Charles Stross. The human race is gone. All that’s left is robots, who were built to be imprinted on humans. The vast majority of robots are horrified at the idea of recreating humans.