This does help, thank you. I’d come to similar judgments and maybe couldn’t sustain them long because I didn’t know of anyone else with them.
I think this also happens to help me ask my question better. What I’d also like to know:
What are the intended trajectories of people on the front-lines? Is it merging with super AIs to remain on the front-lines, or is it “gaming” in lower intelligence reservations structured by yet more social hierarchies and popularity contests? Is this a false dichotomy?
Neither is ultimately repugnant to me or anything. Nothing future pharmaceuticals couldn’t probably fix. I just truly don’t know what they think they can expect. If I did, maybe I could have a better idea of what I can personally expect, so that I don’t needlessly choose some trajectory in vain.
I guess, above, what I was trying to communicate—if there’s anything there to communicate at all—is a kind of appreciation for how not-fun it may be to have no choice but to live in a lower intelligence reservation, as someone with analogous first-hand experience. So if all of us ultimately have no choice in the matter, what would be some things we might see in value journals while living in a reservation? (Assuming the values wouldn’t be prone to be fundamentally derived from any kind of idolatry.)
I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you’ve “made it”; there are always smarter people to make you feel dumb. So at any level, you’d better get used to asking stupid questions.
And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. Also why I’m asking.
I think it would be nice if someone wrote a post on “visceral comparative advantage” giving tips on how to intuitively connect “the best thing I could be doing” with comparative advantage rather than absolute notions. I’m not quite sure how to do it myself. The inability to be satisfied by a small niche is something that made a lot more sense when humans lived in small tribes and there was a decent chance to climb to the top.
I don’t think many people on the “front lines” as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don’t know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn’t think of now.
So at any level, you’d better get used to asking stupid questions.
It’s probably just me but the Stack Exchange community seems to make this hard.
I think it would be nice if someone wrote a post on “visceral comparative advantage” giving tips on how to intuitively connect “the best thing I could be doing” with comparative advantage rather than absolute notions.
Yes, that would be nice. And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain. Actually, just your first paragraph in your response seems to have almost done that, if not entirely.
I don’t think many people on the “front lines” as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don’t know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn’t think of now.
It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be some very general yet very informative features of advanced states of the relevant kind.
Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain.
Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth)—it means you’re not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.
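To make the point concrete, here is a minimal sketch of VNM-style trade-offs: lotteries are ranked by expected utility, not by their best possible outcome. The plans and utility numbers below are entirely made up for illustration.

```python
def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in lottery)

# A risky plan: small chance of the best outcome, large chance of nothing.
risky = [(0.25, 100.0), (0.75, 0.0)]
# A modest plan: a certain, middling outcome.
modest = [(1.0, 30.0)]

# Aiming only "for the highest" would pick the risky plan (its best outcome
# is 100), but expected utility prefers the modest one here.
best = max([risky, modest], key=expected_utility)
assert expected_utility(risky) == 25.0
assert expected_utility(modest) == 30.0
assert best is modest
```

Which plan wins depends entirely on the probabilities and utilities, which is exactly the trade-off that absolute thinking refuses to admit.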
Or, even more basically: optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can’t find any solutions which pass our threshold. It’s then useful to remember that the threshold was arbitrary to begin with, and that the real question is which action we prefer: what’s relatively best?
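The contrast between threshold-based acceptance and relative comparison can be sketched in a few lines. The option names, scores, and threshold here are hypothetical.

```python
# Some candidate actions with illustrative scores.
options = {"plan_a": 4.2, "plan_b": 5.7, "plan_c": 3.1}
THRESHOLD = 8.0  # an arbitrary "acceptable" bar

# Absolute thinking: nothing passes the bar, so we're stuck with no answer.
acceptable = [name for name, score in options.items() if score >= THRESHOLD]
assert acceptable == []

# Relative thinking: as long as any options exist, there is a best one.
best = max(options, key=options.get)
assert best == "plan_b"
```

The threshold version can return an empty set and leave us spinning; the relative version always yields an answer, which is the practical content of "which action do we prefer?"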
Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it’s just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it’s ignoring trade-offs by the fallacy of absolute thinking.
Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
Yes.
I think of absolutes as a fallacy (whet in the realm of utility as opposed to truth) -- it means you’re not admitting trade-offs.
I may just not know of any principled ways of forming a set of outcomes to begin with, so that it may be treated as a lottery and so forth.
But it would seem that aesthetics or axiology must still play some role in that formation, since precise and certain truths about the future aren’t known, and yet at least some structure seems subjectively required—if not objectively required—in constructing a (firm but mutable) set of highest outcomes.
So far my best attempts have involved not much more than basic automata concepts for personal identity and future configurations.