I think this was worse than the worst advice I could have been asked to imagine. Lines like this:
One way to think about this advice is: every day Google, OpenAI, Hugging Face, and 1000 other companies are hiring someone, and that someone will likely work to advance AI. If we imagine the marginal case where a company is deciding between hiring you and someone slightly less concerned about AI alignment, wouldn’t you rather they hire you?
almost seem deliberately engineered, as if you’re trying to use the questioner’s biases against them. If OP is reading my comment, I’d like him to consider whether everyone doing what this commenter wants would result in anything different from the clusterfuck of a situation we currently have.
Imagine if someone was concerned about contributing to the Holocaust, and someone else told them that if they were really concerned, what they ought to do was try to reform the Schutzstaffel from the “inside”. After all, they’re going to hire someone, and it’d of course be better for them to hire you than some other guy. You’re a good person, OP, aren’t you? When you’ve transported all those prisoners, you can just choose to pointlessly get shot trying to defend them from all of the danger you put them in.
Imagine if someone was concerned about contributing to the Holocaust
This is an uncharitable characterization of my advice. AI is not literally the Holocaust. Like all technology, it is morally neutral. At worst, it is a nuclear weapon; at best, Aligned AI is an enormously positive good.