I’m no longer employed by MIRI. I think Yudkowsky is by far the best source of technical alignment research insight, but MIRI’s research program was, in retrospect, probably pretty doomed even before I got there. I can see ways to improve it, but I’m not that confident in them, and I can somewhat directly see that I’m probably not capable of carrying out my suggested improvements. And AFAIK, as you say, they’re not currently doing very much alignment research. I’m also fine with appearing self-serving: if I were actively doing alignment research, I might recommend myself, though I don’t really think it’s appropriate to do so to a random person who can’t evaluate arguments about alignment research and doesn’t know who to trust. I guess if someone pays me enough I’ll do some alignment research. I recommend myself as one authority among others on strategy regarding strong human intelligence amplification.