Yes (though just the peer review, not the safety emphasis). I can send you my thoughts about it if you’d like; email me at <my LW username> at gmail.
I thought about the differential development point and came away thinking it would be net positive; I convinced a few other people of this as well. That holds even if it’s just modifying peer review, without having safety researchers run the conference.
Cool!
I guess another way of thinking about this is not as a safety emphasis so much as a forecasting emphasis. Reminds me of our previous discussion here. If someone could invent new scientific institutions that reward accurate forecasts about scientific progress, that could be really helpful for knowing how AI will progress and for building consensus about which approaches are safe/unsafe.
+1, that’s basically the story I have in mind. I think of it as less about forecasting and more about understanding deep learning and how it works, but it serves essentially the same purpose: it helps with knowing how AI will progress and building consensus about what’s safe/unsafe.