I’m pretty sure the marginal utility of fiction diminishes once a significant portion of my life is taken up by fiction.
Then is that also the solution to infinite ethics: that we should be scope insensitive to even larger amounts of the same thing once we already devote a significant portion of our lives to it? And what do you mean by ‘diminishes’? Are you saying that we should apply discounting?
I don’t know. The utility function measures outputs rather than inputs; the fiction case is confusing because the two are closely correlated (i.e. how much awesome fiction I consume is correlated with how much time I spend consuming awesome fiction).
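To spell out the two readings of ‘diminishes’, here is a minimal sketch; the square-root curve and the 0.9 discount factor are purely illustrative assumptions, not anything I’m committed to:

```python
import math

# Reading 1: diminishing marginal utility over *outputs*.
# Utility is concave in the total amount of awesome fiction consumed,
# regardless of when it is consumed.
def utility_of_fiction(amount):
    return math.sqrt(amount)  # concave: each extra unit adds less

# Reading 2: time discounting over *inputs*.
# Later hours of consumption get smaller weights, even if each hour
# delivers exactly the same amount of fiction.
def discounted_utility(amounts_per_hour, discount=0.9):
    return sum(discount**t * a for t, a in enumerate(amounts_per_hour))

# Ten hours at one unit of awesome fiction per hour.
hours = [1.0] * 10
print(utility_of_fiction(sum(hours)))  # ~3.16: marginal utility shrinks with the total
print(discounted_utility(hours))       # ~6.51: weights shrink with elapsed time
```

The fiction case blurs the two because amount consumed and time spent move together; pull them apart and the positions in this thread come apart too.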
For your solution to make sense, we’d need some definition of “time devoted to a particular cause” that we can then plug into our utility function. For example, if parts of your brain are contemplating some ethical problem while you’re busy doing something else, does that count as time devoted?
It seems doable, though. I don’t think it’s the solution to infinite ethics, but you could conceive of an agent that behaves this way while still being considered rational and altruistic.
If you can increase the intensity of the awesomeness of the fiction, without increasing the duration I spend there, I certainly have no objections. Similarly, if you can give an awesomizing overlay to my productive activity, without interfering with that productivity, then again I have no objections.
My objection to the simulator is that it takes away from my productive work. It’s not that I stop caring about fiction, it’s that I keep caring about reality.
Even if I accept that living in the simulator is genuinely good and worthwhile… what am I doing sitting around in the sim when I could be out there getting everyone else to sign up? Actually using the simulator creates only one person-hour of sim-time per hour; surely I can get better leverage than that through a little well-placed evangelism.
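The leverage claim is just arithmetic. Here is a toy version; the conversion numbers are made up solely to show the shape of the comparison:

```python
# Toy comparison of an hour spent in the simulator vs. an hour of evangelism.
# All figures below are hypothetical; only the structure of the comparison matters.

HOURS_PER_CONVERT = 10        # assumed: hours of evangelism needed to sign up one person
SIM_HOURS_PER_CONVERT = 1000  # assumed: sim-hours each convert eventually logs

def sim_hours_from_using_sim(hours_spent):
    # Using the simulator yourself creates exactly one sim-hour per hour spent.
    return hours_spent

def sim_hours_from_evangelism(hours_spent):
    # Each convert contributes their own future sim-hours.
    converts = hours_spent / HOURS_PER_CONVERT
    return converts * SIM_HOURS_PER_CONVERT

print(sim_hours_from_using_sim(1))    # 1.0
print(sim_hours_from_evangelism(1))   # 100.0 -> far better leverage, on these numbers
```

The exact numbers don’t matter; as long as an hour of evangelism eventually produces more than one sim-hour, sitting in the simulator myself is the worse trade.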