This is propaganda and alarmism.
Edit: I spent the past 20 minutes thinking about the best way to handle this type of situation. I could make a gigantic effortpost, pointing out the millions/billions of people who are currently suffering and dying. People whose lives could be improved immeasurably by AI that is being slowed down (a tiny bit) by this type of alarmism. But that would be fighting propaganda with propaganda.
I could point out the correct way of handling these types of thoughts using CBT and similar strategies. Again, it would be a huge, tremendously difficult endeavour and it would mean sacrificing my own free time to most likely get downvotes and insults in return (I know because I’ve tried enlightening people in the past elsewhere).
Ultimately, I think the correct choice for me is to avoid lesswrong and adjacent forums because this is not the first or second time I’ve seen this type of AI doomerism and I know for a fact depression is contagious.
Sure, he’s trying to cause alarm via alleged excerpts from his life. But surely society should have some way to move to a state of alarm iff that’s appropriate. Do you see a better protocol than this one?
I’d appreciate seeing the post that you mentioned, and part of me does worry that you are right.
Part of me worries that this is all just a form of group mental illness. That I have been sucked into a group that was brought together through a pathological obsession with groundless abstract prediction and a sad-childhood-memories-induced intuition that narratives about the safety of powerful actors are usually untrustworthy. That fears about AI are an extreme shadow of these underlying group beliefs and values. That we are just endlessly group-reinforcing our mental-ill-health-backed doomy predictions about future powerful entities. I put weight on this part of me having some or all of the truth.
But I have other parts that tell me that these ideas just all make sense. In fact, the more grounded, calm and in touch with my thoughts and feelings I am—the more I think/feel that acknowledging AI risk is the healthiest thing that I do.
In mental health circles, the general guiding principle for whether a patient needs treatment is whether a train of thought is interfering with their enjoyment of life.
Do you enjoy thinking about these topics and discussing them?
If you don’t—if it just stresses you out and makes the light of life shine less bright, then it’s not a bad idea to step away from it or take a break. Even if AI is going to destroy the world, that day isn’t today, and arguably the threat of it arriving sooner than a natural demise only increases the value of the good days you have left. Don’t squander a limited resource.
But if you enjoy the discussions and the debates, if you find the topic stimulating and the problem space interesting—you’re going to whittle your days away doing something no matter how you spend your time. It might as well be working on something fun that you believe in and feel may make a difference to the world. Even if your worries are overblown, time spent on something you enjoy with people you respect isn’t time wasted.
Health is a spectrum and too much of a good thing isn’t good at all. But only you can decide what’s too much and what’s the right amount. So if you feel it’s too much, you can scale it back. And if you feel it’s working out well for you, more power to you—the sense of feeling in the right place at the right time (even if under perceived dire circumstances) is a bit of a rarity in the human experience.
In general—enjoy life while it lasts. No matter your objective p(doom), your relative p(doom) is 100%. Make the most of the time you have.
This is good advice, but you must recognize that it’s also advice to be selfish. Many rationalists believe in utilitarianism, which preaches near zero selfishness. This is an immense source of stress and unhappiness.
This is particularly problematic when combined with the historically under-recognized importance of the alignment problem. There’s been a concern that each individual’s efforts might have a nontrivial influence on the odds of a good future for a truly vast number of sentient beings.
Fortunately, AI alignment/outcomes work is steadily gaining recognition, so individuals can step away a little more easily, knowing someone else will do similar work.
But this does not fully solve the problem. Pretending it doesn’t exist and advising someone to be selfish when they have complex, well-thought-out reasons not to be is not going to help those individuals.
I feel weird reading this. Like, preventing a planetary catastrophe from killing you is pretty much selfish. On the other hand, increasing your own happiness is just as good a method of increasing total utility as any other. So the real question is “am I capable of making an impact on the AI-risk issue given such-and-such tradeoffs to my happiness?”
I totally agree that increasing your own happiness is a valid way to pursue utilitarianism. I think this is often overlooked. (although let’s bear in mind that almost nobody actually earns-to-give and so almost nobody walks the talk of being fully utilitarian; the few I know of who do have made a career of it, keeping their true motives in question)
I think rationalists are aware of the following calculus: My odds of actually saving my own life by working on AGI alignment are very small. There are thousands of people involved; the odds of my making the critical contribution are tiny, on the order of maybe 1/10000 at most. But the payoff could be immense; I might live for a million years and expand my mind to experience much more happiness per year, if this all goes very well.
For anyone who does that calculus, it is worth being quite unhappy now to have that less-than-1/10000 chance of achieving so much more happiness.
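To make that tradeoff concrete, here is a minimal back-of-the-envelope sketch. The 1/10000 chance and the million-year lifespan come from the paragraph above; every other figure (the happiness multiplier, the share of present happiness given up) is a purely illustrative assumption, not anyone’s actual number.

```python
# Back-of-the-envelope version of the calculus above.
# The 1/10000 chance and the million-year lifespan come from the comment;
# the happiness figures and the share of present happiness sacrificed are
# hypothetical placeholders, not anyone's actual numbers.

p_decisive = 1e-4               # chance my work is the critical contribution
long_future_years = 1_000_000   # lifespan if things go very well
future_happiness = 10.0         # happiness per year with an expanded mind (assumed)

remaining_years = 50            # ordinary remaining lifespan (assumed)
baseline_happiness = 1.0        # happiness per year now (arbitrary units)
sacrifice_fraction = 0.3        # share of present happiness given up (assumed)

expected_gain = p_decisive * long_future_years * future_happiness
present_cost = sacrifice_fraction * remaining_years * baseline_happiness

print(f"expected gain: {expected_gain:.0f}")  # ~1000 happiness-units
print(f"present cost:  {present_cost:.0f}")   # ~15 happiness-units
```

With these made-up inputs the expected gain dwarfs the present cost, which is why the tradeoff can feel compelling even at tiny probabilities; the conclusion is driven almost entirely by how large the payoff term is allowed to grow.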
I don’t think that’s how everyone thinks of it, and probably not most of them. I suspect that even rationalist utilitarians don’t have it all spelled out in mathematical detail. I certainly don’t.
But my point is, just telling them “hey, you should do something that makes you happy” doesn’t address why most alignment people are doing what they’re doing, because they have very specific logic behind it.
On the other hand, some of them did just start out thinking “this sounds fun” and have found out it’s not, and reminding them to ask if that’s the case could make them happy.
And slightly reduce our odds of a grand future...
I can’t recall another time when someone shared their personal feelings and experiences and someone else declared it “propaganda and alarmism”. I haven’t seen “zero-risker” types do the same, but I would be curious to hear the tale, and if they share it, I don’t think anyone will call it “propaganda and killeveryoneism”.
It’s not propaganda. OP clearly believes strongly in the sentiments discussed in the post, and it’s more a timeline of personal responses to outside events than a piece meant to misinform or sway others regarding those events.
And while you do you in terms of your mental health, people who want to actually be “less wrong” in life would be wise to seek out and surround themselves with ideas different from their own.
Yes, LW has a certain broad bias, and so, ironically, I suspect it serves this role “less well” than it could in helping most of its users be less wrong. But particularly if you disagree with the prevailing views of the community, that makes it an excellent place to spend your time listening, even if it can create a somewhat toxic environment for partaking in discussions and debate.
It can be a rarity to find spaces where people you disagree with take the time to write out well-written and clearly thought-out pieces on their thoughts and perspectives. At least in my own lived experience, many of my best insights and ideas have been the result of strongly disagreeing with something I read and pursuing the train of thought resulting from that exposure.
Sycophantic agreement can give a bit of a dopamine kick, but I tend to find it next to worthless for advancing my own thinking. Give me an articulate and intelligent “no-person” any day over a “yes-person.”
Also, very few topics are actually binaries, even if our brains tend towards categorizing them as such. Data rarely maps to only one axis, and even when it is mapped to a single axis, it typically falls along a spectrum. It’s possible to disagree about the spectrum of a single axis of a topic while finding insight and agreement about a different axis.
Taking what works and leaving what doesn’t is probably the most useful skill one can develop in information analysis.