How do doomy AGI safety researchers and enthusiasts find joy while always maintaining the framing that the world is probably doomed?
We don’t die in literally all branches of the future, and I want to make my future selves proud even if they’re a tiny slice of probability.
I am a mind with many shards of value, many of which can take joy from small and local things, like friends and music. Expecting that the end is coming changes longer term plans, but not the local actions and joys too much.
Use the dignity heuristic as reward shaping

“There’s another interpretation of this, which I think might be better, where you can model people like AI_WAIFU as modeling timelines where we don’t win with literally zero value. That there is zero value whatsoever in timelines where we don’t win. And Eliezer, or people like me, are saying, ‘Actually, we should value them in proportion to how close to winning we got.’ Because that is more healthy… It’s reward shaping! We should give ourselves partial reward for getting part of the way there. He says that in the post: we should give ourselves dignity points in proportion to how close we get.

And this is, in my opinion, a much psychologically healthier way to actually deal with the problem. This is how I reason about the problem. I expect to die. I expect this not to work out. But hell, I’m going to give it a good shot and I’m going to have a great time along the way. I’m going to spend time with great people. I’m going to spend time with my friends. We’re going to work on some really great problems. And if it doesn’t work out, it doesn’t work out. But hell, we’re going to die with some dignity. We’re going to go down swinging.”
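(A short aside, not from the quoted answer: “reward shaping” is a reinforcement-learning term, and the contrast the metaphor borrows is easy to sketch. A sparse reward assigns zero value to every outcome short of outright winning; a shaped reward gives partial credit in proportion to progress. The function names below are purely illustrative.)

```python
# Minimal sketch of the reward-shaping metaphor: a sparse reward credits only
# total victory, while a shaped reward credits partial progress
# ("dignity points"). Names and scaling here are hypothetical.

def sparse_reward(progress: float) -> float:
    """All-or-nothing: zero value unless we fully win."""
    return 1.0 if progress >= 1.0 else 0.0


def shaped_reward(progress: float) -> float:
    """Partial credit in proportion to how close to winning we got."""
    return max(0.0, min(progress, 1.0))


if __name__ == "__main__":
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"progress={p:.2f}  sparse={sparse_reward(p):.2f}  shaped={shaped_reward(p):.2f}")
```

Under the sparse framing every losing timeline looks identical; under the shaped one, getting closer is worth something, which is the psychological point being made.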
Even if the world is probably doomed, there’s a certain feeling, not quite joy, more a mix of satisfaction and determination, that comes from fighting until the very end with everything we have. Poets and writers have argued since the dawn of literature that the pursuit of a noble cause is more meaningful than any happiness we can achieve, and while I strongly disagree with the idea of judging cool experiences by their meaning, it says a lot about how deep the human tradition of deriving value from working for what you believe in runs.
In terms of happiness itself—I practically grew up a transhumanist, and my ideals of joy are bounded pretty damn high. Deviating from that definitely took a lot of getting used to, and I don’t claim to be done in that regard either. I remember someone once saying that six months after it really “hit” them for the first time, their hedonic baseline had oscillated back to being pretty normal. I don’t know if that’ll be the same for everyone, and maybe it shouldn’t be—the end of the world and everything we love shouldn’t be something we learn to be okay with. But we fight against it happening, so maybe between all the fire and brimstone, we’re able to just enjoy living while we are. It isn’t constant, but you do learn to live in the moment, in time. The world is a pretty amazing place, filled with some of the coolest sentient beings in history.
I am not an AI researcher, but it seems analogous to the acceptance of mortality for most people. Throughout history, almost everyone has had to live with the knowledge that they will inevitably die, perhaps suddenly. Many methods of coping have been utilized, but at the end of the day it seems like something that human psychology is just… equipped to handle. x-risk is much worse than personal mortality, but you know, failure to multiply and all that.
Idk, I’m a doomer and I haven’t been able to handle it well at all. If I were told “You have cancer, you’re expected to live 5-10 more years”, I’d at least have a few comforts:
I’d know that I would be missed, by my family at least, for a few years.
I’d know that, to some extent, my “work would live on” in the form of good deeds I’ve done, people I’ve impacted through effective altruism.
I’d have the comfort of knowing that even if I’d been dead for centuries I could still “live on” in the sense that other humans (and indeed, many nonhumans) would share brain design with me, and have drives for food, companionship, empathy, curiosity, etc. A super AI, by contrast, is just so alien and cold that I can’t consider it my brainspace cousin.
If I were to share my cancer diagnosis with normies, I would get sympathy. But there are very few “safe spaces” where I can share my fear of UFAI risk without getting looked at funny.
The closest community I’ve found is the environmentalist doomers, and although I don’t actually think the environment is close to collapse, I do find it somewhat cathartic to read other people’s accounts of being sad that the world is going to die.
Living in the moment helps. There’s joy and beauty and life right here, right now, and that’s worth enjoying.
I wrote a relevant comment elsewhere. Basically I think that finding joy in the face of AGI catastrophe is roughly the same task as finding joy in the face of any catastrophe. The same human brain circuitry is at work in both cases.
In turn, I think that finding joy in the face of a catastrophe is not much different from finding joy in “normal” circumstances, so my advice would be standard stuff like aerobic exercise, healthy food, sufficient sleep, supportive relationships, gratitude journaling, meditation, etc. (of course, this advice is not at all tailored to you and could be bad advice for you).
Also, AGI catastrophe is very much impending. It’s not here yet. In the present moment, it doesn’t exist. It might be worth reflecting on the utility of worry and how much you’d like to worry each day.
Firstly, it’s essential to remember that you can’t control the situation; you can only control your reaction to it. By focusing on the elements you can influence and accepting the uncertainty of the future, it becomes easier to manage the anxiety that may arise from contemplating potentially catastrophic outcomes. This mindset allows AGI safety researchers to maintain a sense of purpose and motivation in their work, as they strive to make a positive difference in the world.
Another way to find joy in this field is by embracing the creative aspects of exploring AI safety concerns. There are many great examples of fiction based on problems with AI safety, e.g.:
“Runaround” by Isaac Asimov (1942)
“The Lifecycle of Software Objects” by Ted Chiang (2010)
“Cat Pictures Please” by Naomi Kritzer (2015)
The Matrix (1999), directed by the Wachowskis
The Terminator (1984), directed by James Cameron
2001: A Space Odyssey (1968), directed by Stanley Kubrick
Blade Runner (1982), directed by Ridley Scott
etc.
Engaging in creative storytelling not only provides a sense of enjoyment, but it can also help to spread awareness about AI safety issues and inspire others to take action.
In summary, finding joy in the world of AGI safety research and enthusiasm involves accepting what you can and cannot control, and embracing the creative aspects of exploring potential AI safety concerns. By focusing on making a positive impact and engaging in imaginative storytelling, individuals in this field can maintain a sense of fulfillment and joy in their work, even when faced with the possibility of a seemingly doomed future.