You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updating on this, it now seems much more likely to me that you place terminal value on your freedom node in a way that gets triggered by more rational algorithms, ones that genuinely attempt to detect restrictions and constraints rather than relying on a mere feeling of control. Is this closer to how you would describe your value?
I’m still having trouble with the idea of treating a universe optimized for one’s own personal freedom as the best outcome (by default I tend to think in terms of optimizing the summed utilities of a collection of minds, rather than of a single one). It is not what I expected.