I saw this thread complaining about the state of peer review in machine learning. Has anyone thought about trying to design a better peer review system, then creating a new ML conference around it and also adding in a safety emphasis?
Yes (though just the peer review, not the safety emphasis). I can send you thoughts about it if you’d like; email me at <my LW username> at gmail.
I’m vaguely worried that this might be net negative for ML in particular, if you’re worried about differential tech development.
The idea is that if the conference is run by people who are interested in safety, they can preferentially accept papers that are good from a differential technological development point of view.
I thought about the differential development point and came away thinking it would be net positive (and convinced a few other people as well), even if it’s just modifying peer review without having safety researchers run the conference.
Cool!
I guess another way of thinking about this is not a safety emphasis so much as a forecasting emphasis. It reminds me of our previous discussion here. If someone could invent new scientific institutions that reward accurate forecasts about scientific progress, that could be really helpful for knowing how AI will progress and for building consensus about which approaches are safe or unsafe.
+1, that’s basically the story I have in mind. I think of it as less about forecasting and more about understanding deep learning and how it works, but it serves basically the same purpose: it helps us know how AI will progress and build consensus about what’s safe or unsafe.
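For concreteness, here’s a minimal sketch of the reward mechanism such an institution might use: score each researcher’s probabilistic forecasts with a proper scoring rule (the log score), so that honest, well-calibrated predictions maximize expected reward. The researcher names, the claim being forecast, and the probabilities below are all invented for illustration.

```python
import math

# Hypothetical forecasts: each researcher assigns a probability to a claim
# like "benchmark X will be solved by the end of the year". Names and
# numbers are made up for illustration.
forecasts = {
    "researcher_a": 0.80,
    "researcher_b": 0.55,
    "researcher_c": 0.20,
}

outcome = True  # suppose the claim resolved true


def log_score(p: float, outcome: bool) -> float:
    """Log score, a proper scoring rule: reporting your true belief
    maximizes your expected score. Higher is better; the maximum is 0."""
    p = min(max(p, 1e-9), 1 - 1e-9)  # clamp away from 0/1 to avoid log(0)
    return math.log(p if outcome else 1 - p)


# Rank forecasters by score; an institution could allocate rewards
# (or reviewing weight, or credibility) based on scores like these.
for name, p in sorted(forecasts.items(),
                      key=lambda kv: log_score(kv[1], outcome),
                      reverse=True):
    print(f"{name}: forecast={p:.2f}, log score={log_score(p, outcome):.3f}")
```

Any proper scoring rule (e.g. the Brier score) would do here; the design point is just that truthful reporting is incentive-compatible, so the institution rewards calibration rather than confident bluster.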