“This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn’t wear off. Being turned into a wirehead arguably kills you, but it’s a much better experience than death for the wirehead!”
You keep dodging the point lol… As someone with some experience with drugs, I can tell you that it's not fun. Human happiness is highly subjective and doesn't depend on a single chemical. For instance, some people love MDMA; others (like me) find it too intense, too chemical, too fabricated a happiness. A forced lifetime on MDMA would be one of the worst tortures I can imagine; it would fry you. And even a very carefully controlled dopamine drip wouldn't be good. But anyway, I know you're probably trolling, so just consider good old-fashioned torture in a dark dungeon instead...
On Paul: yes, he’s wrong, that’s how.
"I think most scenarios where you've got a boundless optimiser superintelligence would lead to the creation of new minds that would perfectly satisfy its utility function."
True, except that, on that basis alone, you have no idea how that would happen or what it would imply for those new minds (and the old ones), since you're not a digital superintelligence.