Thanks! I’m always hungry for good sci-fi utopias :) I particularly liked that mindmelding part.
After also reading Diaspora and ∀V, I was thinking about what should be done with minds that self-modify into insanity and suffer terribly. In their case, talking about consent doesn’t make much sense.
Maybe we could have a mechanism (sketched in code below) where:
I choose some people I trust the most, for example my partner, my mom, and my best friend
I give them the power to revert me back to my previous snapshot from before the modification, even if it’s against my insane will (but only if they unanimously agree)
(optionally) my old snapshot is temporarily revived to be the final arbiter and decide whether I should be reverted; after all, I know myself best
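To make it concrete, here is a minimal sketch; every name in it (GuardianQuorum and all) is of course made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class RevertProposal:
    """A proposal to roll a mind back to a pre-modification snapshot."""
    snapshot_id: str
    approvals: set = field(default_factory=set)

class GuardianQuorum:
    """The trusted people who may revert me to an earlier snapshot,
    even against my (insane) current will, once enough of them agree."""

    def __init__(self, guardians, threshold=None):
        self.guardians = set(guardians)
        # default to unanimity, but the threshold is just a parameter
        self.threshold = threshold if threshold is not None else len(self.guardians)

    def approve(self, proposal: RevertProposal, guardian: str) -> bool:
        if guardian not in self.guardians:
            raise PermissionError(f"{guardian} is not a trusted guardian")
        proposal.approvals.add(guardian)
        return self.is_authorized(proposal)

    def is_authorized(self, proposal: RevertProposal) -> bool:
        return len(proposal.approvals & self.guardians) >= self.threshold

# my partner, my mom, and my best friend; unanimous by default
quorum = GuardianQuorum({"partner", "mom", "best_friend"})
proposal = RevertProposal(snapshot_id="snapshot-before-modification")
quorum.approve(proposal, "partner")
quorum.approve(proposal, "mom")
assert not quorum.is_authorized(proposal)  # still needs best_friend
```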
well, i worry about the ethics of the situation where those third parties don’t unanimously agree and you end up suffering. note that your past self, while it is a very close third party, is a third party among others.
i feel like i still wanna stick to my “sorry, you can’t go to sufficiently bad hell” limitation.
(also, surely whatever “please take me out of there if X” command you’d trust third parties with, you could simply trust Elua with, no?)
Yeah, unanimous may be too strong. Maybe 2-out-of-3 majority voting would be better, for example. And I agree, my past self is a third party too.
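In the sketch above that’s just:

```python
# 2-of-3 majority instead of unanimity
quorum = GuardianQuorum({"partner", "mom", "best_friend"}, threshold=2)
```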
Hm, yeah, trusting Elua to do it would work too. But in scenarios where we don’t have Elua, or have some “almost Elua” that I don’t fully trust, I’d rather rely on my trusted friends. And those scenarios are likely enough that it’s a good option to have.
(As a side note, I don’t think I can fully specify that “please take me out of there if X”. There may be some Xs which I couldn’t foresee, so I want to rely on those third parties’ judgment, not some hard rules. Of course, a sufficiently good Elua could make those judgments too.)
As for that limitation, how would you imagine it? That some mind modifications are just forbidden? I have an intuition that there may be modifications so alien that the only way to predict their consequences is to actually run the modified mind and see what happens. (An analogy: even the most powerful being cannot predict whether some Turing machine halts without actually running it.) So maybe reverting is still necessary sometimes.
i feel like letting people try things, with the possibility of rollback from backup, generally works. let people do stuff by default, and when something looks like a person undergoing too much suffering, roll them back (or terminate them, or whatever other ethically viable outcome is closest to what they would want).
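a toy sketch of that default-allow policy (the Mind stub, the random “observations”, and the 0.9 cutoff are all invented):

```python
# purely illustrative default-allow policy: let the modification run,
# intervene only when observed suffering crosses the limit
import random

class Mind:
    """toy stand-in for a running, self-modified mind"""
    def __init__(self):
        self.suffering = 0.0
        self.finished = False

    def step(self):
        # stand-in for actually observing the mind; real monitoring
        # would be the hard part
        self.suffering = random.random()
        self.finished = random.random() < 0.01

SUFFERING_LIMIT = 0.9  # the "sufficiently bad hell" cutoff

def supervise(mind: Mind) -> str:
    while not mind.finished:
        if mind.suffering > SUFFERING_LIMIT:
            # roll back, terminate, or whatever ethically viable
            # outcome is closest to what they would want
            return "rolled back"
        mind.step()
    return "completed"

print(supervise(Mind()))
```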
maybe pre-emptive “you can’t even try this” would only start making sense if there were concerns that too much experience-time is being filled with people accidentally ending up suffering from unpredictable modifications. (though i suspect i don’t really think this because i’m usually more negative-utilitarian and less average-utilitarian than that)
that said, i’ve never modified my mind in a way that caused me to experience significant suffering. i have a friend who kinda has, by taking LSD and then having a very bad time for the rest of the day, and today-them says they’re glad to have been able to try it. but i think LSD-day-them would strongly disagree.
Yeah, that makes sense.
I’d like serious modifications to (at the very least) require a lot of effort. And to be gradual, so you can monitor whether you’re going in the right direction instead of suddenly jumping into a new region of mindspace. And maybe we could even collectively decide to forbid some modifications.
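Something in this spirit, maybe (the ToyMind and its sanity check are placeholders for the genuinely hard monitoring problem):

```python
# hypothetical sketch: apply one big modification as many small steps,
# snapshotting before each so any single step can be undone
import copy

class ToyMind:
    def __init__(self):
        self.traits = {"openness": 0.5}

    def sane(self) -> bool:
        # placeholder for real monitoring
        return 0.0 <= self.traits["openness"] <= 1.0

def modify_gradually(mind: ToyMind, trait: str, target: float, steps: int = 10) -> ToyMind:
    delta = (target - mind.traits[trait]) / steps
    for _ in range(steps):
        checkpoint = copy.deepcopy(mind)  # snapshot before each step
        mind.traits[trait] += delta
        if not mind.sane():
            return checkpoint  # revert to the last good state
    return mind

print(modify_gradually(ToyMind(), "openness", target=0.9).traits)
```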
(btw, here is a great story about hedonic modification https://www.utilitarianism.com/greg-egan/Reasons-To-Be-Cheerful.pdf)
The reason I lean toward relying on my friends rather than a godlike entity is that by default I distrust centralized systems with enormous power. But if we had an Elua as good as you depicted, I would be okay with that ;)
thanks for the egan story, it was pretty good!
i tend to dislike such systems as well, but a correctly aligned superintelligence would surely be trustable with anything of the sort. if anything, it would at least know about the ways it could fail at this, and tell us about what it knows of those possibilities.