I’m not sure I agree with Jessica’s interpretation of Eliezer’s tweets, but I do think they illustrate an important point about MIRI: MIRI can’t seem to decide if it’s an advocacy org or a research org.
“if you actually knew how deep neural networks were solving your important mission-critical problems, you’d never stop screaming” is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. “taxation is theft”). People like Chris Olah have spent a lot of time studying how neural nets solve problems, and I’ve never heard of them screaming about what they discovered.
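(To be concrete about what that studying can look like: below is a toy PyTorch sketch, entirely my own and nowhere near the depth of Olah-style feature visualization, that computes input-gradient saliency for one hidden unit of a small random network. The point is just that network internals are inspectable artifacts you can poke at, not unknowable horrors.)

```python
# Toy sketch (mine, not anything from Olah's actual work): input-gradient
# saliency for one hidden unit of a small random network, using PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 8, requires_grad=True)
hidden = net[1](net[0](x))   # post-ReLU activations of the hidden layer
hidden[0, 3].backward()      # hidden unit 3, chosen arbitrarily

# x.grad now shows which input features that unit is sensitive to:
# a (very) crude window into "how the network is solving the problem".
print("unit 3 activation:", hidden[0, 3].item())
print("input saliency:", x.grad[0])
```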
Suppose there was a libertarian advocacy group with a bombastic leader who liked to tweet things like “if you realized how bad taxation is for the economy, you’d never stop screaming”. After a few years of advocacy, the group decides it wants to switch to being a think tank. Suppose it hires some unusually honest economists, who study taxation and notice things in the data that kinda suggest taxation might actually be good for the economy sometimes. Imagine you’re one of those economists and you’re about to ask your boss about looking into this further. You might have second thoughts: Will my boss scream at me? Will they fire me? The organizational incentives don’t seem to favor truthseeking.
Another issue with advocacy is you can get so caught up in convincing people that the problem needs to be solved that you forget to solve it, or even take actions that are counterproductive for solving it. For AI safety advocacy, you want to convince everyone that the problem is super difficult and requires more attention and resources. But for AI safety research, you want to make the problem easy, and solve it with the attention and resources you have.
In The Algorithm Design Manual, Steven Skiena writes:
In any group brainstorming session, the most useful person in the room is the one who keeps asking “Why can’t we do it this way?”; not the nitpicker who keeps telling them why. Because he or she will eventually stumble on an approach that can’t be shot down… The correct answer to “Can I do it this way?” is never “no,” but “no, because. . . .” By clearly articulating your reasoning as to why something doesn’t work, you can check whether you have glossed over a possibility that you didn’t think hard enough about. It is amazing how often the reason you can’t find a convincing explanation for something is because your conclusion is wrong.
Being an advocacy org means you’re less likely to hire people who continually ask “Why can’t we do it this way?”, and the ones you do hire will be discouraged from asking if it’s implied that a leader might scream at a proposed solution they dislike. The activist mindset tends to favor evidence-free hyperbole over carefully checking whether you glossed over a possibility, or wondering whether an inability to convince others means your conclusion is wrong.
I dunno if there’s an easy solution to this—I would like to see both advocacy work and research work regarding AI safety. But having them in the same org seems potentially suboptimal.
“MIRI can’t seem to decide if it’s an advocacy org or a research org.”
MIRI is a research org. It is not an advocacy org. It is not even close. You can tell by the fact that it basically hasn’t said anything for the last 4 years. Eliezer’s personal Twitter account does not make MIRI an advocacy org.
(I recognize this isn’t addressing your actual point. I just found the frame frustrating.)
As a tiny, mostly-uninformed data point, I read “if you realized how bad taxation is for the economy, you’d never stop screaming” as having a very different vibe from Eliezer’s tweet, because he didn’t use the word “bad”. I know it’s a small difference, but it hits different. Something in his tweet was amusing because it felt like it was pointing at a presumably neutral thing and making it scary, whereas saying the same thing about a clearly moralistic point seems like it’s doing a different thing.
Again—a very minor point here, just wanted to throw it in.