As often happens, it is to quite an extent a matter of definitions. If by an “end” you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn’t be terminal. This is essentially the same as the choice of reasoning priors, in that anything that can be chosen is, by definition, not a prior, but a posterior of the choice process.
Obviously, if you split the reasoning process into sections, then posteriors of certain sections can become priors of the sections that follow. Likewise, certain means can be more efficiently thought of as ends, and in that case rationality can help you determine what those ends would be.
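The relabeling of posteriors as priors across sections can be made concrete with a toy conjugate Beta-Binomial update (the numbers and function names here are purely illustrative):

```python
def update_beta(alpha, beta, heads, tails):
    """Conjugate Beta-Binomial update: returns the posterior parameters."""
    return alpha + heads, beta + tails

# Stage 1: start from a Beta(1, 1) prior and observe 3 heads, 1 tail.
prior = (1, 1)
posterior_1 = update_beta(*prior, heads=3, tails=1)        # (4, 2)

# Stage 2: the stage-1 posterior now plays the role of a prior.
posterior_2 = update_beta(*posterior_1, heads=2, tails=4)  # (6, 6)

# Relative to stage 2, (4, 2) is a prior; relative to stage 1 it is a
# posterior. Which label applies depends only on where the process is
# split, not on the distribution itself.
print(posterior_2)  # (6, 6)
```

The two stages compose into a single update from the original prior, which is why "prior" and "posterior" are positions in a process rather than intrinsic properties of a distribution.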
The problem with humans is that the evolved brain cannot directly access either core priors or terminal values, and there is no guarantee that they are even coherent enough to be said to properly exist. So every “end” that rises high enough into the conscious mind to be properly reified is necessarily an extrapolation, and hence not a truly terminal end.
If by an “end” you mean a terminal value, then no purely internal process can change that value, because otherwise it wouldn’t be terminal.
A notion of “terminal value” should allow for the possibility of error in following it, including particularly bad errors that cause value drift (a change in which terminal values an agent follows).
Some of your terminal values can modify other terminal values, though. Rational investigation can inform you about optimal trade-offs between them.
Edit: Tradeoffs don’t change that you want more of both A and B. Retracted.