Reading your thoughts on sublinearity, my instinct is that of a CEV utilitarian, rather than feeling that copies matter terminally less and less. It seems to me that caring less about copies amounts to hardcoding curiosity into the utility function as a terminal goal, whereas I expect curiosity to be instrumental under CEV (curiosity as a property of the superintelligence; not in the sense of keeping around intrinsically curious entities, if that's what we want the superintelligence to value).
I think that in a universe large enough relative to your utility function's rate of discount over similarity of computations, and under certain conditions on how you value the size and complexity of those computations, devaluing copies leads to the Garden of God in Unsong: you start tiling the universe with maximally satisfactory viable computations; once you have exhausted the space of such computations, given your "lattice spacing" of what counts as different, you move on to creating less satisfactory computations, and so on. If you are not a utilitarian, you stop when you reach your arbitrary level of what counts as "neutral utility"; depending on the parameters, you either time this to coincide with the end of the universe, or you repeat the whole construction until the universe is filled with copies of your hierarchy of entities. A utilitarian, by contrast (one that values complexity little enough not to fill the whole universe with a single entity), would tile the universe with the optimal entities, with differences only insofar as its utility function says to make those entities value differences and be satisfied by them.
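As a very rough illustration of the dynamic I have in mind (a toy model only; every number and name below is made up, not anything from your post): suppose each distinct computation i has quality q_i, and the j-th extra copy of it is worth q_i * d^j for some discount d. A greedy optimizer with d < 1 builds the Unsong-style hierarchy of many distinct computations, while an optimizer that is linear in copies (d = 1) just tiles everything with the single best one.

```python
import heapq

# Toy sketch only; every quantity here is a made-up stand-in, not a real proposal.
# Distinct computations have qualities q_0 > q_1 > ..., and the j-th extra copy of
# computation i is worth QUALITIES[i] * discount**j. We greedily fill a fixed number
# of slots, always adding whichever computation currently has the highest marginal value.

N_SLOTS = 10_000                                     # "size of the universe"
QUALITIES = [1.0 / (1 + i) for i in range(N_SLOTS)]  # hypothetical quality ladder
NEUTRAL = 0.0                                        # arbitrary "neutral utility" floor

def greedy_fill(discount):
    # Max-heap of (-marginal value, computation index); ties broken by index.
    heap = [(-q, i) for i, q in enumerate(QUALITIES)]
    heapq.heapify(heap)
    chosen = []
    for _ in range(N_SLOTS):
        neg_v, i = heapq.heappop(heap)
        if -neg_v <= NEUTRAL:        # a non-utilitarian could stop at the neutral level
            break
        chosen.append(i)
        heapq.heappush(heap, (neg_v * discount, i))  # the next copy of i is worth less
    return chosen

garden = greedy_fill(discount=0.5)  # copies devalued: a hierarchy of distinct computations
tiling = greedy_fill(discount=1.0)  # linear in copies: the single best computation, repeated
print(len(set(garden)), len(set(tiling)))            # many distinct entities vs. exactly 1
```

Whether the hierarchy gets repeated once the bottom is reached, as in the second case above, corresponds to whether you keep filling slots after everything above the neutral level has already been placed.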
What do you think about this? Do you see it as a problem? Do you think it's too unlikely to matter, for purely combinatorial reasons? Or something else?
Personally I like Unsong's God, and I think His approach is better than tiling the Universe with copies of the same optimal entity (or with copies of an optimal neighborhood, in which each being encounters enough diversity within its own neighborhood to be satisfied).
The Unsong approach might still lead to uncomfortable outcomes, with some people tortured so that other people can have positive experiences different from the ones already tried (hence the solution to the Problem of Evil in Unsong), but I think that, given large enough negative utilities for suffering, the system probably wouldn't create people whose lives are overall strongly net-negative (and might instead put suffering p-zombie robots in the world, if that's really necessary for other people to have novel positive experiences). These are just my guesses, and I'm not confident that we can actually get this right; as I mentioned, I wouldn't want to create any kind of utilitarian sovereign superintelligence. But I think the weird asymmetry baked into infra-Bayesianism, that it can't assign negative utility to any event, makes the whole problem significantly harder and points at a weakness of IB.