I’m no longer employed by MIRI. I think Yudkowsky is by far the best source of technical alignment research insight, but MIRI’s research program was, in retrospect, probably pretty doomed even before I got there. I can see ways to improve it, but I’m not that confident in them, and I can somewhat directly see that I’m probably not capable of carrying out my suggested improvements. And AFAIK, as you say, they’re not currently doing very much alignment research. I’m also fine with appearing self-serving: if I were actively doing alignment research, I might recommend myself, though I don’t really think it’s appropriate to do so to a random person who can’t evaluate arguments about alignment research and doesn’t know whom to trust. I guess if someone pays me enough I’ll do some alignment research. I recommend myself as one authority among others on strategy regarding strong human intelligence amplification.
I’m not saying that MIRI has some effective plan that more money would help with. I’m only saying that, unlike most of the actors accepting money to work in AI Safety, at least they won’t use a donation in a way that makes the situation worse. Specifically, MIRI does not publish insights that help the AI project, and they are very careful in choosing whom they will teach technical AI skills and knowledge.
at least they won’t use a donation in a way that makes the situation worse
Seems false: they could have problematic effects on discourse if their messaging is poor or seems dumb in retrospect.
I disagree pretty heavily with MIRI, which makes this more likely from my perspective.
It seems likely that Yudkowsky has lots of bad effects on discourse right now, even by his own lights. From my understanding, I feel pretty good about official MIRI comms activities, despite a number of disagreements.
Not sure what you’re asking. I think someone trying to work on the technical problem of AI alignment should read Yudkowsky. I think this because… of a whole bunch of the content of his ideas and arguments. I would need more context to elaborate, but it doesn’t seem like you’re asking about that.
I still don’t know what you mean.