It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be features of advanced states of the supposed relevant kind that are very general yet very informative.
Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions that there is nothing wrong in principle with indulging all-or-nothing dispositions, save for the contingent residual pain.
Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility as opposed to truth) -- it means you’re not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.
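To make the trade-off concrete, here is a minimal sketch of expected utility over lotteries. All probabilities and utilities are made-up numbers for illustration; the point is only that a balanced lottery over merely good outcomes can beat a tiny chance of the single best outcome.

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in lottery)

# Aiming only for the highest outcome: small chance of the best, else nothing.
all_or_nothing = [(0.01, 100), (0.99, 0)]

# Admitting trade-offs: good chances of decent outcomes.
balanced = [(0.60, 30), (0.40, 5)]

print(expected_utility(all_or_nothing))  # 1.0
print(expected_utility(balanced))        # 20.0
```

Under the VNM framework, the balanced lottery is preferred despite never reaching the single highest outcome.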
Or, even more basic: optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can’t find any solutions which pass our threshold. It’s then useful to remember that the threshold was arbitrary to begin with, and that the real question is which action we prefer: what’s relatively best?
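The contrast between threshold thinking and relative comparison can be sketched in a few lines; the option names and scores below are hypothetical.

```python
# Hypothetical options with preference scores (higher is better).
options = {"plan_a": 4, "plan_b": 6, "plan_c": 5}

# Absolute ("acceptable or not") thinking: apply an arbitrary threshold.
threshold = 7
acceptable = [name for name, score in options.items() if score >= threshold]
print(acceptable)  # [] -- nothing passes, and deliberation stalls

# Relative comparison: the preference ordering still yields an answer.
best = max(options, key=options.get)
print(best)  # plan_b
```

When no option clears the bar, the threshold gives no guidance at all, while the ordering always picks out the relatively best action.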
Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it’s just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it’s ignoring trade-offs by the fallacy of absolute thinking.
Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
Yes.
I think of absolutes as a fallacy (when in the realm of utility as opposed to truth) -- it means you’re not admitting trade-offs.
I may just not know of any principled ways of forming a set of outcomes to begin with, so that it may be treated as a lottery and so forth.
But it would seem that aesthetics or axiology must still have some role in the formation, since precise and certain truths aren’t known about the future and yet at least some structure seems subjectively required—if not objectively required—through the construction of a (firm but mutable) set of highest outcomes.
So far my best attempts have involved not much more than basic automata concepts for personal identity and future configurations.