I feel weird reading this. Preventing a planetary catastrophe from killing you is, in a sense, pretty much selfish. On the other hand, increasing your own happiness is just as good a method of increasing total utility as any other. So the real question is “am I capable of making an impact on the AI-risk issue, given such-and-such tradeoffs to my happiness?”
I totally agree that increasing your own happiness is a valid way to pursue utilitarianism. I think this is often overlooked. (Although let’s bear in mind that almost nobody actually earns-to-give, so almost nobody walks the talk of being fully utilitarian; the few I know of who do have made a career of it, which leaves their true motives in question.)
I think rationalists are aware of the following calculus: My odds of actually saving my own life by working on AGI alignment are very small. There are thousands of people involved; the odds of my making the critical contribution are tiny, on the order of maybe 1/10000 at most. But the payoff could be immense; I might live for a million years and expand my mind to experience much more happiness per year, if this all goes very well.
For anyone who does that calculus, it is worth being quite unhappy now to have that less-than-1/10000 chance of achieving so much more happiness.
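The calculus above can be sketched as a toy expected-utility comparison. All of the numbers here are illustrative assumptions pulled from the comment’s rough figures (1/10000 odds, a million-year payoff), not claims about anyone’s actual estimates:

```python
# Toy expected-utility comparison, in ordinary "happiness-years".
# Every number here is an illustrative assumption.
p_critical = 1 / 10_000          # chance my work makes the critical difference
years_grand_future = 1_000_000   # lifespan if alignment goes very well
happiness_multiplier = 100       # happiness per year with an expanded mind
baseline_years = 50              # remaining ordinary lifespan
unhappiness_discount = 0.5       # assume alignment work halves present happiness

# Expected utility of grinding on alignment vs. just enjoying life.
eu_work = (p_critical * years_grand_future * happiness_multiplier
           + (1 - p_critical) * baseline_years * unhappiness_discount)
eu_enjoy = baseline_years

print(eu_work, eu_enjoy)  # the tiny chance of a huge payoff dominates
```

Under these assumptions the work branch wins by a couple of orders of magnitude, which is the point of the argument: a small probability times an enormous payoff can swamp a guaranteed modest one. The conclusion is, of course, only as good as the made-up inputs.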
I don’t think that’s how everyone thinks of it, and probably not most of them. I suspect that even rationalist utilitarians don’t have it all spelled out in mathematical detail. I certainly don’t.
But my point is that just telling most alignment people “hey, you should do something that makes you happy” doesn’t address their reasons for doing what they do, because they have very specific logic behind it.
On the other hand, some of them did just start out thinking “this sounds fun” and have found out it’s not, and reminding them to ask if that’s the case could make them happy.
And slightly reduce our odds of a grand future...