Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable; it's an engineering problem. So basically I'd be very happy to see a world like the one in the first half of your story, and I don't think it would lead to the second half.
It seems their conclusion was that no number of happy moments for people could possibly outweigh the unimaginably large quantity of suffering in the universe required to sustain them: tiny flickers of merely human happiness amid the combined agony of a googolplex or more fundamental energy transitions within a universal wavefunction. There is probably some irreducible number of energy transitions required to support anything like a subjective human experience, and (in the context of the story, at least) the total cost in suffering for that would be unforgivably high.
I don’t think the first half would definitely lead to the second half, but I can certainly see how it could.
The sequence description is: “Short stories about (implausible) AI dooms. Any resemblance to actual AI takeover plans is purely coincidental.”
I don’t think the idea is that happy moments are necessarily outweighed by suffering. It reads to me more like the idea that suffering is inherent in existence, not just for humans but for all life, combined with a kind of negative utilitarianism.
I think I would be very happy to see that first-half world, too. And depending on how we got it, yeah, it probably wouldn’t go wrong in the way this story portrays. But the principles that generate that world might actually be underspecified in something like the ways described: they might allow for multiple very different ethical frameworks, so we couldn’t easily know in advance where such a world would evolve next. After all, Buddhism exists: within human mindspace there is an attractor state for morality that aims at self-denial and cessation of consciousness as a terminal value. In some cases this includes venerating beings who vow to eternally intervene in and remain in the world until everyone achieves such cessation; in others it includes honoring or venerating those who self-mummify through poisoning, dehydrating, and/or starving themselves.
Humans are very bad at this kind of self-denial in practice, except for a very small minority. AIs need not have that problem. Imagine if, additionally, they did not inherit the pacifism generally associated with Buddhist thought but instead believed, like medieval Catholics, in crusades, inquisitions, and forced conversion. If you train an AI on human ethical systems, I don’t know what combination of common-among-humans-and-good-in-context ideas it might end up generalizing or universalizing.