Personally? I guess I would say that I mostly (98%?) care about long-run power for similar values on reflection to me. And, probably some humans are quite close to my values and many are adjacent.
I’m sorry, I literally don’t understand what you’re saying here. What does “care about long-run power for similar values” mean? Do you care about maximizing your own power?
As in, I care about the long-run power of values-which-are-similar-to-my-values-on-reflection. Which includes me (on reflection) by definition, but I think probably also includes lots of other humans.
Values are moral statements about right and wrong. How do values have power?
In the context of optimization, values are anything you want (whether moral in nature or otherwise).
Any time a decision is made based on some value, you can view that value as having exercised power by controlling the outcome of that decision.
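For a concrete sketch (a hypothetical illustration, not anything from the original exchange): treat a "value" as nothing more than an objective function, and it "exercises power" whenever it determines which option gets picked.

```python
# Minimal sketch: a "value" is an objective function, and it exercises
# power by controlling the outcome of a decision. The option names and
# scores below are made up purely for illustration.

def choose(options, value):
    """Return the option the value function ranks highest."""
    return max(options, key=value)

options = ["save money", "donate", "spend on leisure"]

# Two different values ranking the same options differently:
frugality = {"save money": 3, "donate": 1, "spend on leisure": 0}.get
altruism = {"save money": 1, "donate": 3, "spend on leisure": 0}.get

print(choose(options, frugality))  # -> "save money"
print(choose(options, altruism))   # -> "donate"
```

Same decision procedure, same options; which value is plugged in is what controls the outcome.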
Or, put more simply: the way that values have power is that values have people, who have power.