I’m having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I’d like to do: for a while I have considered doing a Ph.D. in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) with, hopefully, a modest salary that I could donate to worthwhile causes.
But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and maybe a few other existential risks is worth focusing on (not even things that I still consider to be enormously important relative to some others). In principle I could focus on those as well. I’m not intelligent enough to do serious work on Friendly AI, but I could probably transition relatively quickly to working in machine learning and data science, with perhaps some opportunities to contribute and likely higher earnings.
The biggest problem, however, is that whenever I seem to be on track towards doing something useful and interesting, a monumental existential confusion kicks in and my productivity plummets. This is mostly related to thinking about life and death.
EY recently suggested that we should care about solving AGI alignment because of quantum immortality (or its cousins). This is a subject that has greatly troubled me for a long time. Thinking logically, big world immortality seems like an inescapable conclusion from some fairly basic assumptions. On the other hand, the whole idea feels completely absurd.
Having to take that seriously, even if I don’t believe in it 100 percent, has made it difficult for me to find joy in the things that I do. Combining big world immortality with the other ideas prevalent in the LW memespace regarding existential risks and so on sort of suggests that the most likely outcome I (or anybody else) can expect in the long run is surviving indefinitely as the only remaining human, or almost certainly as the only remaining person among those I currently know. Probably in increasingly bad health as well.
It doesn’t help that I’ve never been that interested in living for a very long time, the way most transhumanists seem to be. Sure, I think aging and death are problems that we should eventually solve, and in principle I don’t have anything against living significantly longer than the average human lifespan, but it’s not something I’ve been very interested in actively seeking, and if there’s a significant risk that those very many years would not be very comfortable, then I quickly lose interest. So the theories that sort of make this whole death business seem like an illusion are difficult for me. And overall, the idea does make the mundane things I do now seem even more meaningless. Obviously, this is taking its toll on my relationships with other people as well.
This has also led me to approach related topics a lot less rationally than I probably should. Because of this, I think my estimates of both the severity of the UFAI problem and our ability to solve it have gone up, as has my estimate of the likelihood that we’ll be able to beat aging in my lifetime, because those are things that seem to be necessary to escape the depressing conclusions I’ve pointed out.
I’m not good enough at fooling myself, though. As I said, my ability to concentrate on doing anything useful is very weak nowadays. It often feels easier to do something that I know is an outright waste of time but gives me something to think about, like watching YouTube, playing video games or drinking beer.
I would appreciate any input. Given how seriously people here take things like the simulation argument, the singularity or MWI, existential confusion cannot be that uncommon. How do people usually deal with this kind of stuff?
I’d suggest you prioritize your personal security. Once you have an income that doesn’t take up much of your time, a place to live, a stable social circle, etc., then you can think about devoting your spare resources to causes.
The reason I’d make this suggestion is that personal liberty allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn’t work out, the fallout can be considerable.
I am essentially imagining you to be similar to me about five years ago.
It sounds like you are not really excited about anything in your own life. You’re probably more excited about far-future hypotheticals than about any project or prospect in your own immediate future. This is a problem because you are a primate who is psychologically deeply predisposed to be engaged with your environment and with other primates.
I used to have similar problems of motivation and engagement with reality. At some point I just sort of became exhausted with it all and started working on “insignificant” projects like writing a book, working on an app, and raising kids. It turns out that focusing on things that are fun and engaging to work on is better for my mental health than worrying about how badly I’m failing to live up to my imagined ideal of a perfectly rational agent living in a Big World.
If I find that I’m having to argue with myself that something is useful and I should do it, then I’m fighting my brain’s deeply ingrained and fairly accurate Bullshit Detector Module. If I actually believe that a task is useful in the beliefs-as-constraints-for-anticipated-experience sense of “believe”, then I’ll just do it and not have any internal dialogue at all.
The part about not being excited about anything sounds very accurate and is certainly a part of the problem. I’ve also tried just taking up projects and focusing on them, but I should probably try harder as well.
However, a big part of the problem is that it’s not just that those things feel insignificant; it’s also that I have a vague feeling that I’m sort of putting my own well-being in jeopardy by doing that. As I said, I’m very confused about things like life, death and existence on a personal level. How do I focus on mundane things when I’m confused about basic things such as whether I (or anyone else) should expect to eventually die or to experience a weird-ass form of subjective anthropic immortality, and about what that actually means? Should that change how I act somehow?
If there is One Weird Trick that you should be using right now in order to game your way around anthropics, simulationism, or deontology, you don’t know what that trick is, you won’t figure out what that trick is, and it’s somewhat likely that you can’t figure out what that trick is, because if you did you would get hammered down by the acausal math/simulators/gods.
You also can’t know if you’re in a simulation, a Big quantum world, a big cosmological world, or if you’re a reincarnation. Or one or more of those at the same time. And each of those realities would imply a different thing that you should be doing to optimize your … whatever it is you should be optimizing. Which you also don’t know.
So really I just go with my gut and try to generally make decisions that I probably won’t think are stupid later given my current state of knowledge.
You also can’t know if you’re in a simulation, a Big quantum world, a big cosmological world, or if you’re a reincarnation
But you can make estimates of the probabilities (EY’s estimate of the big quantum world part, for example, is very close to 1).
So really I just go with my gut and try to generally make decisions that I probably won’t think are stupid later given my current state of knowledge.
That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some cases, the decision that would be not-stupid in a big world scenario could be the complete opposite of what would make sense in a non-big world situation.
I meant the word “stupid” to carry a connotation of “obviously bad, obviously destroying value.”
Playing with my children rather than working extra hard to earn extra money to donate to MIRI will never be “stupid” although it may be in some sense the wrong choice if I end up being eaten by an AI.
This is true for the same reasons that putting money in my 401K is obviously “not stupid”, especially relative to giving that money to my brother-in-law, who claims to have developed a new formula for weatherproofing roofs. Maybe my brother-in-law becomes a millionaire, but I’m still not going to feel like I made a stupid decision.
You may rightly point out that I’m not being rational and/or consistent. I seem to be valuing safe, near-term bets over risky, long-term bets, regardless of what the payouts of those bets might be. Part of my initial point is that, as an ape, I pretty much have to operate that way in most situations if I want to remain sane and effective. There are some people who get through life by making cold utilitarian calculations and acting on even the most counterintuitive conclusions, but the psychological cost of behaving that way has not been worth it to me.
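For what it’s worth, that preference can be written down. Here is a minimal sketch (the payoffs, probabilities, and the log utility are hypothetical, chosen purely for illustration, not anything from this thread) of how a concave utility function makes a certain, modest payoff beat a gamble with a higher expected value, which is roughly the “safe, near-term bets over risky, long-term bets” pattern described above.

```python
import math

def expected_utility(gamble, utility):
    """Expected utility of a gamble given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in gamble)

# Hypothetical numbers, purely for illustration.
safe_bet  = [(1.0, 50_000)]              # a certain, modest payoff
risky_bet = [(0.5, 120_000), (0.5, 0)]   # higher expected value, but might pay nothing

risk_neutral = lambda x: x                # cares only about expected value
risk_averse  = lambda x: math.log(1 + x)  # diminishing returns on large payoffs

print(expected_utility(safe_bet, risk_neutral), expected_utility(risky_bet, risk_neutral))
# 50000.0 60000.0  -> the risk-neutral agent prefers the gamble
print(expected_utility(safe_bet, risk_averse), expected_utility(risky_bet, risk_averse))
# ~10.8   ~5.8     -> the risk-averse agent prefers the sure thing
```

The point of the sketch is only that preferring the safe bet isn’t an arithmetic mistake; it corresponds to a different (concave) utility over outcomes.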
I would suggest reading “The Subtle Art of Not Giving a F*ck”. It’s about how to choose your own values deliberately, how often we are distracted by grand or impossible goals that exhaust our mental focus and only bring unhappiness, and which smaller, more attainable values bring much more happiness. It seems like a perfect fit for your situation. It personally saved my life, but as with anything in self-help, your mileage may vary.
Thanks for the tip. I suppose I actually used to be pretty good at not giving too many fucks. I’ve always cared about stuff like human rights or climate change or, more lately, AI risk, but I’ve never really lost much sleep over them. Basically, I think it would be nice if we solved those problems, but the idea that humanity might go extinct in the future doesn’t cause me too much headache in itself. The trouble, I think, is that I’ve lately begun to think that I may have a personal stake in this stuff, the point illustrated by the EY post that I linked to. See also my reply to moridinamael.
I’d recommend taking up gardening, especially if you have a local community garden.
Nothing like having your hands in the earth, to ground you. You will also then be surrounded with peaceful folk, who care for each other, and the land. Not a bad group to connect with.
And you will be personally helping save the world, just by growing and planting some trees. If you do high value woods, like cherry, you will be taking CO2 permanently out of circulation, if the wood is used for making things.
Jump on a bike and go plant some apricots along old creek beds; it will help stabilize the soil and provide food for people and animals.
Even if you are living in the slums, you can go out and collect some lichen living on an old building, mix it up in a blender with whole milk, let it sit a couple of days, then go spray it into the cracks of an old brick building or the sides of old concrete walls, and it will help purify the air. If you do the same with a lichen you find growing on an old tree, and spread it to other living trees, it will help fix nitrogen from the air into a form plants can use.
Just dealing daily with living, growing things is very powerful for the psyche.
And growing things, actually producing food, and giving it away is a very powerful form of altruism.
Or you can just get a grow light, and use that to help relax.....
AI•ON (http://ai-on.org/) is an open community dedicated to advancing Artificial Intelligence by:
Drawing attention to important yet under-appreciated research problems.
Connecting researchers and encouraging open scientific collaboration.
Providing a learning environment for students looking to gain machine learning experience.
Work that just focuses on advancing Artificial Intelligence reduces the time we have to get the alignment problem solved, so it might be more harmful than helpful for x-risk.