This is good advice, but you must recognize that it’s also advice to be selfish. Many rationalists believe in utilitarianism, which preaches near zero selfishness. This is an immense source of stress and unhappiness.
This is particularly problematic when combined with the historically under-recognized importance of the alignment problem. There’s been a concern that each individual’s efforts might have a nontrivial influence on the odds of a good future for a truly vast number of sentient beings.
Fortunately, the importance of AI alignment/outcomes is steadily becoming better recognized, so individuals can step away a little more easily, knowing someone else will do similar work.
But this does not fully solve the problem. Pretending it doesn’t exist and advising someone to be selfish when they have complex, well-thought-out reasons not to be is not going to help those individuals.
I feel weird reading this. Like, preventing a planetary catastrophe from killing you is pretty much selfish. On the other hand, increasing your own happiness is just as good a method of increasing total utility as any other. So the real question is “am I capable of making an impact on the AI-risk issue, given such-and-such tradeoffs against my happiness?”
I totally agree that increasing your own happiness is a valid way to pursue utilitarianism. I think this is often overlooked. (although let’s bear in mind that almost nobody actually earns-to-give and so almost nobody walks the talk of being fully utilitarian; the few I know of who do have made a career of it, keeping their true motives in question)
I think rationalists are aware of the following calculus: My odds of actually saving my own life by working on AGI alignment are very small. There are thousands of people involved; the odds of my making the critical contribution are tiny, on the order of maybe 1/10000 at most. But the payoff could be immense; I might live for a million years and expand my mind to experience much more happiness per year, if this all goes very well.
For anyone who does that calculus, it is worth being quite unhappy now to have that less-than-1/10000 chance of achieving so much more happiness.
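For concreteness, a rough sketch of that comparison might look like the following (the symbols are my own illustration, not anything anyone has actually written down):

$$
\underbrace{p}_{\approx 10^{-4}} \times \underbrace{T_{\text{future}} \cdot k \cdot h}_{\sim 10^{6}\ \text{years at elevated happiness}} \;\stackrel{?}{>}\; \underbrace{T_{\text{now}} \cdot \Delta h}_{\text{a few decades of reduced happiness}}
$$

where $h$ is baseline happiness per year, $k > 1$ is the hoped-for multiplier from an expanded mind, and $\Delta h \le h$ is the happiness sacrificed per year in the meantime. Even at $p = 1/10000$, the left side comes out to roughly a hundred happiness-years times $k$, which dwarfs the few decades of $\Delta h$ on the right.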
I don’t think that’s how everyone thinks of it, and probably not most of them. I suspect that even rationalist utilitarians don’t have it all spelled out in mathematical detail. I certainly don’t.
But my point is that, for most alignment people, just telling them “hey, you should do something that makes you happy” doesn’t address the reasons they’re doing what they are, because they have very specific logic for why they’re doing it.
On the other hand, some of them did just start out thinking “this sounds fun” and have found out it’s not, and reminding them to ask if that’s the case could make them happy.
And slightly reduce our odds of a grand future...