If you think we have a fair shot at stopping an AI apocalypse, and that AGI is a near-term risk, then it is absolutely rational to optimize for solving AI safety. This holds even if you are entirely selfish.
Also, this essay is about advice given to ambitious people. It's not about individuals choosing unambitious paths (wheeling). Charlie is a sad example of what can happen to you; I'm not complaining about him.
Yes, selfish agents want to not get turned into paperclips. But they have other goals too. You can prefer that alignment be solved without wanting to dedicate your mind, body, and soul to waging jihad against unaligned AI. Where can Charlie effectively donate, say, 10% of his salary to best mitigate x-risk? Not MIRI (according to MIRI).