I read the Lifespan Dilemma (which I had not before; thank you for pointing it out [and writing it]). I choose torture on specks vs. torture without so much as a second thought, but I wouldn’t give Omega a penny for life extension, for this reason:
Following a super-exponential utility function requires a super-polynomial amount of memory. I just don’t know what it would mean to have more experiences than can be counted using all the matter in the universe. Existing outside of PSPACE is so... alien.
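To put a rough number on that (my own toy sketch, nothing from the original dilemma): merely storing a utility value U takes about log2(U) bits, so a super-exponentially growing utility schedule needs super-polynomially many bits before the agent has computed anything at all.

```python
# Toy sketch: storing a utility value U takes about log2(U) bits, so a
# utility that grows super-exponentially in n needs super-polynomially
# many bits just to be written down.
def bits_to_store(utility: int) -> int:
    return utility.bit_length()  # == floor(log2(utility)) + 1

for n in range(1, 6):
    u = 2 ** (2 ** n)           # a super-exponential utility schedule
    print(n, bits_to_store(u))  # prints 2**n + 1: super-polynomial in n
```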
If I wanted to keep count of how many experiences I’ve had, it would be morally impermissible to simply increment natural numbers. Some natural numbers encode universes of unspeakable torment, and I’d hate for my IP register to accidentally create a hell dimension.
Is it really so disheartening to believe that the marginal utility I’d place on the Nth experience* would eventually decrease to 0?
*Where N is a number that can encode a universe-timeline unfathomably larger than ours.
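For concreteness, here is the kind of bounded utility I have in mind; the particular formula is just an illustration, not a claim about anyone’s actual values:

```python
import math

# Illustrative bounded utility: the total approaches a ceiling of 1, so
# the marginal utility of the Nth experience shrinks toward 0 (at N on
# the order of 10**100 it underflows to 0.0 in floating point).
def utility(n: int) -> float:
    return 1.0 - 1.0 / math.log2(n + 2)

for n in (10, 10**6, 10**100):
    marginal = utility(n + 1) - utility(n)
    print(n, round(utility(n), 6), marginal)
```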
ETA:
“You’re not complaining about VNM per se, you’re complaining that you think your preferences correspond to a bounded utility function.”
Yes, I agree. I have no quarrel with the VNM axioms. I think agents should use only a polynomial amount of memory, and therefore want them to follow utility functions that grow at most exponentially in the complexity of outcomes: with p bits of state, an agent can distinguish at most 2^p utility levels, so anything growing faster than exponentially would outrun what the agent can even represent.
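The converse direction, in the same toy terms (again my own framing, not anything in VNM itself):

```python
# Converse sketch: with p bits of state an agent can distinguish at most
# 2**p utility levels, so the largest utility it can even represent
# grows only exponentially in its memory p.
def max_representable_utility(p_bits: int) -> int:
    return 2 ** p_bits - 1  # largest unsigned integer in p_bits bits

for p in (8, 64, 1024):
    print(p, max_representable_utility(p))
```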
“A polynomial amount of memory ought to be enough for anyone.”—solipsist