MIRI should have a form you can fill out (or an email address) specifically for people who think they’ve made an advance in AI and are worried about ending the world with it. MIRI should also have Cold-War-style hotlines to OpenAI, DeepMind, and the other major actors.
Does MIRI do much in the way of capabilities research? It is my understanding that they don’t. If MIRI doesn’t do capabilities research then it seems unlikely to me they would do much with an idea that is all about capabilities advancement.
I don’t think they do either. I was thinking they would provide alignment advice / troubleshooting services and would facilitate coordination in the event of a multipolar slow-takeoff scenario.
I guess the process would be to pass it on to whichever capabilities researchers they trust with it. There would be a few of those at this point.
So why not go straight to those researchers instead of MIRI? Because MIRI is a more legible, responsible intermediary, I guess.