I’d appreciate seeing the post that you mentioned, and part of me does worry that you are right.
Part of me worries that this is all just a form of group mental illness. That I have been sucked into a group that was brought together through a pathological obsession with groundless abstract prediction and a sad-childhood-memories-induced intuition that narratives about the safety of powerful actors are usually untrustworthy. That fears about AI are an extreme shadow of these underlying group beliefs and values. That we are just endlessly group-reinforcing our mental-ill-health-backed doomy predictions about future powerful entities. I put weight on this part of me having some or all of the truth.
But I have other parts that tell me that these ideas just all make sense. In fact, the more grounded, calm, and in touch with my thoughts and feelings I am, the more I think/feel that acknowledging AI risk is the healthiest thing I do.
In mental health circles, the general guiding principle for whether a patient needs treatment is whether their train of thought is interfering with their enjoyment of life.
Do you enjoy thinking about these topics and discussing them?
If you don’t—if it just stresses you out and makes the light of life shine less bright, then it’s not a bad idea to step away from it or take a break. Even if AI is going to destroy the world, that day isn’t today, and arguably the threat of it arriving sooner than a natural demise increases the value of the good days you have left. Don’t squander a limited resource.
But if you enjoy the discussions and the debates, if you find the topic stimulating and the problem space interesting—you’re going to whittle your days away doing something no matter how you spend your time. It might as well be working on something fun that you believe in and feel may make a difference to the world. Even if your worries are overblown, time spent on something you enjoy with people you respect isn’t time wasted.
Health is a spectrum and too much of a good thing isn’t good at all. But only you can decide what’s too much and what’s the right amount. So if you feel it’s too much, you can scale it back. And if you feel it’s working out well for you, more power to you—the sense of feeling in the right place at the right time (even if under perceived dire circumstances) is a bit of a rarity in the human experience.
In general—enjoy life while it lasts. No matter your objective p(doom), your relative p(doom) is 100%. Make the most of the time you have.
This is good advice, but you must recognize that it’s also advice to be selfish. Many rationalists believe in utilitarianism, which preaches near-zero selfishness. This is an immense source of stress and unhappiness.
This is particularly problematic when combined with the historically under-recognized importance of the alignment problem. There’s been a concern that each individual’s efforts might have a nontrivial influence on the odds of a good future for a truly vast number of sentient beings.
Fortunately, the importance of AI alignment/outcomes is being steadily better recognized, so individuals can step away a little more easily, knowing someone else will do similar work.
But this does not fully solve the problem. Pretending it doesn’t exist and advising someone to be selfish when they have complex, well-thought-out reasons not to be is not going to help those individuals.
I feel weird reading this. Like, preventing a planetary catastrophe from killing you is pretty much selfish. On the other hand, increasing your own happiness is just as good a method of increasing total utility as any other. So the real question is “am I capable of making an impact on the AI-risk issue given such-and-such tradeoffs on my happiness?”
I totally agree that increasing your own happiness is a valid way to pursue utilitarianism. I think this is often overlooked. (Although let’s bear in mind that almost nobody actually earns-to-give, and so almost nobody walks the talk of being fully utilitarian; the few I know of who do have made a career of it, which keeps their true motives in question.)
I think rationalists are aware of the following calculus: My odds of actually saving my own life by working on AGI alignment are very small. There are thousands of people involved; the odds of my making the critical contribution are tiny, on the order of maybe 1/10000 at most. But the payoff could be immense; I might live for a million years and expand my mind to experience much more happiness per year, if this all goes very well.
For anyone who does that calculus, it is worth being quite unhappy now to have that less-than-1/10000 chance of achieving so much more happiness.
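To make the shape of that calculus concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, taken from or invented around the comment above (the 1/10000 chance, the million-year lifespan, a happiness multiplier, a cost of present unhappiness), not an actual estimate of anything.

```python
# A rough expected-value sketch of the calculus described above.
# Every number here is an illustrative assumption, not a real estimate.

p_critical = 1 / 10_000       # assumed chance my work makes the decisive difference
years_if_success = 1_000_000  # assumed lifespan in a very good future
happiness_multiplier = 10     # assumed boost to happiness per year in that future

# Expected happy-year-equivalents gained by taking the bet
expected_gain = p_critical * years_if_success * happiness_multiplier

# Happy-year-equivalents sacrificed by, say, a decade of stressful work now
years_of_work = 10
present_cost = years_of_work * 0.5  # assume the work halves present happiness

print(f"expected gain: {expected_gain:,.0f} happy-year-equivalents")
print(f"present cost:  {present_cost:,.0f} happy-year-equivalents")
# With these made-up numbers the expected gain (1,000) dwarfs the cost (5),
# which is the shape of the argument, whatever one thinks of the inputs.
```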
I don’t think that’s how everyone thinks of it, and probably not most of them. I suspect that even rationalist utilitarians don’t have it all spelled out in mathematical detail. I certainly don’t.
But my point is, just telling them “hey, you should do something that makes you happy” doesn’t address why most alignment people are doing what they’re doing; they have very specific logic behind it.
On the other hand, some of them did just start out thinking “this sounds fun” and have found out it’s not, and reminding them to ask if that’s the case could make them happy.
And slightly reduce our odds of a grand future...