There are now quite a lot of AI alignment research organizations, of widely varying quality. I’d name the two leading ones right now as Redwood and Anthropic, not MIRI (which is in something of a rut technically). Here’s a big review of the different orgs by Larks:
https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison
I don’t know, I might be wrong here, but it seems to me that most serious AGI x-risk research comes from MIRI-affiliated people. Most other organisations (with exceptions) seem to mostly write hacky math-free papers. Is there particular research you like?
https://transformer-circuits.pub/ seems impressive to me!