I suspect my advice is the exact opposite of the Less Wrong/EY consensus, so here goes:
Choose to work at whatever company will allow you personally to get as good at AI/Machine learning as possible.
This is a restatement of my advice at the end of my essay on AI Alignment. Specifically, the two strategies I am the most optimistic about, Game Theory and The Plan both depend on very smart people becoming as wise as possible before the Singularity comes.
From a game-theory point of view, advancing AI knowledge in general is a tragedy of the commons. Stopping AI progress before it passes the danger level (whatever that might be) would require coordination from everyone all at once. And it isn’t even possible to know which field (compilers, formal mathematical methods, hardware improvements, AI art) will be the one that puts us over the top. That means there is very little benefit in personally refusing to work on advancing AI, and the refusal comes at a huge cost, since you basically have to give up on any career even tangentially related to technology.
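The dominance structure of that argument can be made concrete with a toy payoff model. All the numbers below are illustrative assumptions, not measurements; the point is only the shape of the incentives:

```python
# A minimal sketch of the tragedy-of-the-commons structure described above.
# Payoff numbers are illustrative assumptions, not empirical estimates.

def payoff(you_advance: bool, others_advancing: int) -> float:
    """Your payoff: a private career benefit if you advance AI,
    minus a shared risk cost that grows with the total number of advancers
    and is borne by everyone regardless of their own choice."""
    career_benefit = 10.0 if you_advance else 0.0
    total_advancers = others_advancing + (1 if you_advance else 0)
    shared_risk = 0.2 * total_advancers
    return career_benefit - shared_risk

# No matter how many of the other 99 people are advancing AI,
# advancing is individually better for you...
for others in (0, 50, 99):
    assert payoff(True, others) > payoff(False, others)

# ...yet everyone abstaining beats everyone advancing.
assert payoff(False, others_advancing=0) > payoff(True, others_advancing=99)
```

Because the private benefit exceeds your marginal share of the collective cost, "advance AI" is a dominant strategy for each individual even though universal abstention would leave everyone better off, which is exactly why unilateral refusal accomplishes so little.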
On the other hand, AI Alignment is likely to be solved by a “small group of thoughtful individuals”. Increasing your skills proportionally increases your chance of being part of that group (since it seems you already care about the topic).
One way to think about this advice is: every day Google, Open AI, Hugging Face, and 1000 other companies are hiring someone, and that someone will likely work to advance AI. Imagine the marginal case where a company is deciding between hiring you and someone slightly less concerned about AI alignment: wouldn’t you rather they hire you?
Note that this advice does not mean you get to leave your ethics at the door. Quite the opposite: if you are working somewhere and it turns out they are doing something egregiously stupid (like deploying a non-airgapped AI), it is your duty to do everything in your power to stop them. Complain to your boss, leak information to the press, chain yourself to the server. Whatever you do, do not become the engineer who warned about disaster but then quietly shrugged when pressured by management. But if you refuse to take any jobs related to AI, you won’t even be in the room when the disaster is about to happen. And on the margin, you should assume that somebody worse will be.
Can someone who downvoted the agreement karma please enlighten me as to why they disagree? This really seems like the only way forward. (Trying to make my career choice right now, as I am beginning my master’s research this year.)
I think this was worse than the worst advice I could have imagined. Lines like this:
One way to think about this advice is: every day Google, Open AI, Hugging Face, and 1000 other companies are hiring someone, and that someone will likely work to advance AI. Imagine the marginal case where a company is deciding between hiring you and someone slightly less concerned about AI alignment: wouldn’t you rather they hire you?
almost seem deliberately engineered, as if you’re trying to use the questioner’s biases against them. If OP is reading my comment, I’d like him to consider whether everyone doing what this commenter wants would result in anything different from the clusterfuck of a situation we currently have.
Imagine if someone was concerned about contributing to the Holocaust, and someone else told them that if they were really concerned, what they ought to do was try to reform the Schutzstaffel from the “inside”. After all, they’re going to hire someone, and it’d of course be better for them to hire you than some other guy. You’re a good person, OP, aren’t you? When you’ve transported all those prisoners, you can just choose to pointlessly get shot trying to defend them from all of the danger you put them in.
Imagine if someone was concerned about contributing to the Holocaust
This is an uncharitable characterization of my advice. AI is not literally the Holocaust. Like all technology, it is morally neutral. At worst, it is a nuclear weapon. And at best, Aligned AI is an enormously positive good.
I didn’t downvote, but your suggestion seems obviously wrong to me, so:
Working at one of those companies (assuming you add value to them) is a pretty reliable way to get unfriendly AGI faster.
If you want to build skills, there are lots of ways to do that without working at very dangerous companies.
It wasn’t my suggestion; it was Logan Zoellner’s post.
Hm, can we even reliably tell when AI capabilities have reached the “danger level”?