You’re aware that ‘catgirls’ is local jargon for “non-conscious facsimiles” and therefore the concern here is orthogonal to porn?
Oops, had forgotten that, thanks. I don’t agree that catgirls in that sense are orthogonal to porn, though. At all.
If you don’t mind, please elaborate on what part of “healthy relationship” you think can’t be cashed out in preference satisfaction.
No part, but you can’t merely ‘satisfy preferences’.. you also have to not-satisfy preferences that have a stagnating effect. Or IOW, a healthy relationship is made up of satisfaction of some preferences and dissatisfaction of others -- for example, humans have an unhealthy, unrealistic, and excessive desire for certainty. This is the problem with CelestAI I’m pointing to: not all your preferences are good for you, and you (anybody) probably aren’t mentally rigorous enough to even have a preference ordering over all the preference conflicts that come up. There’s one particular character who likes fucking and killing.. and drinking.. and those are basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered harm to him as a person.
To look at it from a different angle, a halfway-sane AI has the potential to abuse systems, including human beings, at enormous and nigh-incomprehensible scale, and to do so without deception and through satisfying preferences. The indefiniteness and inconsistency of ‘preference’ is a huge security hole in any algorithm attempting to optimize along that ‘dimension’.
Do you not value that-which-I’d-characterise-as ‘comfortable companionship’?
Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us. It must challenge us, and if that challenge is well executed, we will often experience a sense of dissatisfaction as a result.
(mere goal-directed behaviour mostly falls short of this benchmark, providing rather inconsistent levels of challenge.)
Totally agree. Adding them in is unnecessary; they are already there. That’s my understanding of humanity—a person has, at some level, most of the preferences that any person has ever had, and those things will emerge given the right conditions.
Good point, ‘closure’ is probably more accurate; it’s the evidence (people’s outward behaviour) that displays ‘certainty’.
Absolutely disagree that Lars is bounded—to me, this claim is on a level with ‘Who people are is wholly determined by their genetic coding’. It seems trivially true, but in practice it describes such a huge area that it doesn’t really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That’s one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired, there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.
I would agree if the proposition was that Lars thinks that Lars is bounded. But that’s not a very interesting proposition, and has little bearing on Lars’ actual situation.. people tend to be terrible at having accurate beliefs in this area.
* I am not saying that you should, if you are an FAI, aim directly at causing people to feel dissatisfied, but rather aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences: how they prioritize them, whether there are other things they could prefer, and so on. Preferences are partially malleable.
If I’m a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, and changes to people’s circumstances, as much as or more than by simply stating any actual truth.
She herself states precisely: “I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the letter, by omitting facts, selecting facts, selecting subjective language elements and imagery… She later clarifies: “it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload.”
CelestAI does not have a universal lever—she is much smarter than Lars, but not infinitely so.. But by the same token, Lars definitely doesn’t have a universal anchor. The only things stopping Lars’s improvement are Lars and CelestAI—and the latter’s behaviour does not even proceed logically from her own rules; it’s just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond an animalistic existence, only that CelestAI doesn’t do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.
That said, Lars isn’t necessarily ‘broken’, such that CelestAI would need to ‘fix’ him. But I’ll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that: satisfying on many, many dimensions rather than just a few. If I didn’t maintain that, I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.
I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we’re physically bounded, but our mental life seems to be very much like an onion: nobody reaches ‘the extent of their development’ before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.
Already having that capacity, the ‘moral duty’ (I prefer not to use such words, as I suspect I may die laughing if I do so too much) is merely to progressively fulfill it.