I claim that you can, in fact, get more utilons that way. For now.
This is based on hearing about the experiences of various people who tried the naive Level 2 Robutil move of “try to optimize your leisure/etc to get more utilons”, and then found themselves weirdly fucked up. The claim is that the move “actually, just optimize some things for yourself” works better overall than the move of “try to explicitly evaluate everything in utilons.”
But, it does seem true/important that this is a temporary state of affairs. A fully informed utilitarian with the Utility Textbook From the Future could optimize their leisure/well-being fully for utility, the tails would come apart, and they would do different things than a fully informed “humanist” would. The claim here is that a bounded utilitarian who knows they are bounded and info-limited can eventually recognize they are not yet close enough to a fully fledged theory to try to use the math directly. (This sort of pattern seems common for various “turn things into math” projects.)
I agree this is important enough to be part of the OP though, and that the current phrasing is misleading.
It took Robutil longer still to consider that perhaps humans not only need to prioritize their own wellbeing and friendships, but to prioritize them for their own sake?
Oh for sure. When I said “you”, I, uh, was in fact assuming the reader was a human.
I am not under the illusion that a self-actualizing AgentGPT reading this essay needs to prioritize its wellbeing and friendships.
(but, edited)