another crucial consideration here is that a benevolent ASI could engage in acausal trade to reduce suffering in the unreachable parts of the universe.[1] (comparing the expected value of that possibility against the expected value of human-caused long-term suffering is complex, and involves speculation about the many variables going into each side)
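to make the shape of that comparison concrete, here's a toy sketch; every number in it is a made-up placeholder for illustration, not an estimate:

```python
# toy expected-value comparison: suffering reduced in unreachable regions via
# acausal trade by a benevolent ASI, vs. human-caused long-term suffering.
# all values below are placeholder assumptions, not estimates.

p_benevolent_asi = 0.1        # assumed: alignment succeeds and the ASI is benevolent
p_acausal_trade_works = 0.05  # assumed: acausal trade is feasible and used this way
suffering_reduced = 1e9       # assumed: suffering averted in unreachable regions (arbitrary units)

p_human_s_risk = 0.02         # assumed: probability of human-caused long-term suffering
suffering_caused = 1e9        # assumed: magnitude of that suffering (same arbitrary units)

ev_trade = p_benevolent_asi * p_acausal_trade_works * suffering_reduced
ev_s_risk = p_human_s_risk * suffering_caused

print(f"EV of suffering reduced via acausal trade: {ev_trade:.3g}")
print(f"EV of human-caused long-term suffering:    {ev_s_risk:.3g}")
```

the point isn't the numbers; it's that each side is a product of several highly speculative variables, so the comparison is sensitive to all of them.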
there’s writing about this elsewhere; here i’m just noting that the possibility / topic exists
i wrote this about it, but i don’t think it’s comprehensive enough: https://quila.ink/posts/ev-of-alignment-for-negative-utilitarians/