Notice that I wouldn’t use “avoid UFAI danger” as a metric. If the MIRI people are motivated to answer interesting questions about decision theory and coordination between agents-who-can-read-source-code, I think they’re doing worthwhile work.
Worthwhile? Maybe. But it seems dishonest to collect donations that are purportedly for avoiding UFAI danger if they don’t actually result in avoiding UFAI danger.