My cynical view is: write some papers about how the problems they need to solve are really hard; write enough papers each year to appear to be making progress, and live lives of luxury.
I think this is fine if the papers are good. It’s routine in academic research that somebody says “I’m working on curing cancer,” and then it turns out that they’re really studying one little gene that’s related to some set of cancers. In general, it’s utterly normal in the academy that somebody announces dramatic goal A, and then really works on subproblem D that might ultimately help achieve C, and then B, an important special case of A.
When founding the first police force, one of Peel’s key principles was that the only way the police could be evaluated was by the prevalence of crime—not by how much work they were seen to be doing, nor by how good the public felt about their efforts. It’s very hard to find a similar standard with which to hold LW to account.
The standard I would use is “are there a significant number of people who find the papers interesting and useful?” And by that standard, I think MIRI is improving significantly. A large fraction of academics with tenure in top-50 computer science departments aren’t producing work that’s better.
Notice that I wouldn’t use “avoid UFAI danger” as a metric. If the MIRI people are motivated to answer interesting questions about decision theory and coordination between agents-who-can-read-source-code, I think they’re doing worthwhile work.
Worthwhile? Maybe. But it seems dishonest to collect donations that are purportedly for avoiding UFAI danger if they don’t actually result in avoiding UFAI danger.