You’re right, this does answer most of my questions. I had made incorrect assumptions about what you would consider optimal.
After updating on this, it now seems much more likely to me that you place terminal value on your freedom node, such that it is triggered by genuinely rational algorithms that attempt to detect restrictions and constraints, rather than by a mere feeling of control. Is this closer to how you would describe your value?
I’m still having trouble with the idea of regarding a universe optimized for one’s own personal freedom as the best outcome (my default is to think about optimizing the summed utility of a collection of minds, rather than a single mind’s). It is not what I expected.
This should answer most of the questions above. Yes, the universe is terrible. It would be much better if the universe were optimized for my freedom.
All values are fungible. The exchange rate is not easily inspected, and thought experiments are probably no good for figuring them out.