This is a bit tangential, and a bit ranty, maybe a bit out of line, but it might help [a bit]...
From one self-hater to another: I’ve always been negative. I’ve always disliked myself, my past decisions, the world around me, and the decisions made therein. Here’s the kind of philosophy I’ve embraced over the past few years:
My pessimism motivates me something like the way nihilism motivates Nietzsche. It is the ultimate freedom. I’m not weighed down by this oppressive sense that I’m missing some great opportunity or taking an otherwise good life and shitting on it. Why? Because I suck, the human condition sucks, and life sucks—so I might as well fucking do whatever I’ve got to do to get to wherever I want to go. I’m probably not going to get there, but I’ll be damned if I don’t die trying.
I’ve tried a lot of different things to try to absolve myself of this kind of inherent, long-standing negativity, but that’s the wrong way to go about it. This is the way I am, and I’m pretty OK with that. Embracing this was cathartic, a little like someone uncovering a repressed memory. I’ve come out of the pessimist’s closet. ;)
Things like this journaling method are good: it’s good to be explicit about what you’re thankful for, it’s good to act in ways that maximize your ability to do things, and maybe, after some time, that negativity will go away (or, at least, the negative part of said negativity). But you don’t need self-esteem, and motivation is a farce; it’s why people sit around waiting for some translucent muse to inspire them, telling themselves they need “motivation” to do what they’ve got to do. The thing to be wary of is letting your negativity become a force that oppresses you.
I think I’m on track to doing important things (relatively “late” in life [compared to my peers], but w/e), and here’s how I see myself: like Arnold Schwarzenegger at the end of Terminator 2, except instead of lava, it’s sewage, and instead of a thumbs-up, it’s my middle finger.
This does help, thank you. I’d come to similar judgments and maybe couldn’t sustain them long because I didn’t know of anyone else with them.
I think this also happens to help me ask my question better. What I’d also like to know:
What are the intended trajectories of people on the front-lines? Is it merging with super AIs to remain on the front-lines, or is it “gaming” in lower intelligence reservations structured by yet more social hierarchies and popularity contests? Is this a false dichotomy?
Neither is ultimately repugnant to me or anything; nothing future pharmaceuticals couldn’t probably fix. I just truly don’t know what they think they can expect. If I did, maybe I could have a better idea of what I can personally expect, so that I don’t unnecessarily choose some trajectory largely in vain.
I guess what I was trying to communicate above, if there’s anything there to communicate at all, is a kind of appreciation for how not-fun it may be to have no choice but to live in a lower-intelligence reservation, speaking as someone with analogous first-hand experience. So if all of us ultimately have no choice in such a matter, what are some things we might see in value journals kept in a reservation? (Assuming the values wouldn’t be prone to be fundamentally derived from any kind of idolatry.)
I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you’ve “made it”; there are always smarter people to make you feel dumb. So at any level, you’d better get used to asking stupid questions.
And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. Which is also why I’m asking.
I think it would be nice if someone wrote a post on “visceral comparative advantage” giving tips on how to intuitively connect “the best thing I could be doing” with comparative advantage rather than absolute notions. I’m not quite sure how to do it myself. The inability to be satisfied by a small niche is something that made a lot more sense when humans lived in small tribes and there was a decent chance to climb to the top.
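As a toy version of the arithmetic (with names and numbers assumed purely for illustration): suppose A can produce 10 units of research or 4 units of ops per day, while B can produce 2 or 3. A is absolutely better at both, but the opportunity costs point the other way:

```latex
\text{A's cost of 1 ops} = \tfrac{10}{4} = 2.5 \text{ research}, \qquad
\text{B's cost of 1 ops} = \tfrac{2}{3} \approx 0.67 \text{ research}
```

So B has the comparative advantage in ops, and the pair does best with B on ops and A on research, even though B is worse at everything in absolute terms. The hard “visceral” part is getting that to feel true from the inside.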
I don’t think many people on the “front lines” as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don’t know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn’t think of now.
So at any level, you’d better get used to asking stupid questions.
It’s probably just me, but the Stack Exchange community seems to make this hard.
I think it would be nice if someone wrote a post on “visceral comparative advantage” giving tips on how to intuitively connect “the best thing I could be doing” with comparative advantage rather than absolute notions.
Yes, that would be nice. And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those (probably less informed) intuitions that there’s nothing wrong in principle with indulging all-or-nothing dispositions, save for the contingent residual pain. Actually, just the first paragraph of your response seems to have almost done that, if not entirely.
I don’t think many people on the “front lines” as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don’t know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn’t think of now.
It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general yet very informative features of advanced states of the supposedly relevant kind.
It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general yet very informative features of advanced states of the supposedly relevant kind.
Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those (probably less informed) intuitions that there’s nothing wrong in principle with indulging all-or-nothing dispositions, save for the contingent residual pain.
Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth): it means you’re not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.
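To put the same point in symbols (a minimal sketch, with toy numbers assumed for illustration): a lottery L that assigns probability p_i to outcome o_i is ranked by its expected utility, so preference is always a comparison between lotteries, never a check against an absolute bar.

```latex
U(L) = \sum_i p_i \, u(o_i), \qquad L_1 \succeq L_2 \iff U(L_1) \ge U(L_2)
```

For instance, with assumed utilities u(best) = 1, u(mediocre) = 0.4, u(nothing) = 0, a sure mediocre outcome beats a 30% shot at the best one, since 0.4 > 0.3 · 1 + 0.7 · 0 = 0.3. Refusing everything short of the best is exactly the trade-off-denying absolutism.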
Or, even more basic: optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can’t find any solutions that pass our threshold. It’s then useful to remember that the threshold was arbitrary to begin with, and that the real question is which action we prefer: what’s relatively best?
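A toy sketch of the contrast (the option names and scores are all made up for illustration):

```python
# Threshold-based ("recognition-primed") choice vs. relative optimization.
options = {"plan_a": 0.3, "plan_b": 0.5, "plan_c": 0.4}

def threshold_choice(options, threshold):
    """Return the first option clearing the threshold, else None.
    This is the mode that can spin out of control: if nothing
    passes the bar, it yields no decision at all."""
    for name, score in options.items():
        if score >= threshold:
            return name
    return None

def relative_choice(options):
    """Always returns the relatively best option, however bad
    the whole field is in absolute terms."""
    return max(options, key=options.get)

print(threshold_choice(options, threshold=0.9))  # None: stuck, no plan clears 0.9
print(relative_choice(options))                  # 'plan_b': still a decision
```

The point of the sketch is only that the second function never gets stuck, because it asks a comparative question rather than an absolute one.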
Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it’s just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it’s ignoring trade-offs by the fallacy of absolute thinking.
Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.
Yes.
I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth): it means you’re not admitting trade-offs.
I may just not know of any principled ways of forming a set of outcomes to begin with, so that it may be treated as a lottery and so forth.
But it would seem that aesthetics or axiology must still have some role in that formation, since precise and certain truths about the future aren’t known, and yet at least some structure seems subjectively required (if not objectively required) in the construction of a (firm but mutable) set of highest outcomes.
So far my best attempts have involved not much more than basic automata concepts for personal identity and future configurations.