I don’t know that I agree with the OP’s proposed basis for distinction, but I at least have a reasonable feel for what it would preclude. (I would even agree that, given clients substantially like modern-day humans, precluding that stuff is reasonably ethical. That said, the notion that a system on the scale the OP is discussing would have clients substantially like modern-day humans and relate to them in a fashion substantially like the fictional example given strikes me as incomprehensibly absurd.)
I don’t quite understand the basis for distinction you’re suggesting instead. I mean, I understand the specific examples you’re listing for exclusion, of course (eternal torture, lack of boredom, complete destruction), but not what they have in common or how I might determine whether, for example, choosing to be eternally alienated from friendship should be allowed or disallowed. Is that sufficiently horrifying? How could one tell?
I do understand that you don’t mean the system to prevent, say, my complete self-destruction as long as I can build the tools to destroy myself without the system’s assistance. The OP might agree with you about that; I’m not exactly sure. I suspect I disagree, personally, though I admit it’s a tricky enough question that a lot depends on how I frame it.