I’m talking about the science of governance, digitalised governance, and theories of contracting, rather than the less technical, object-level policy and governance work currently done at institutions. This is absolutely not to the detriment of that work; it’s just a selection criterion for this post, which focuses on technical agendas to which technical readers of LW may contribute.
The view that there is a sharp divide between “AGI-level safety” and “near-term AI safety and ethics” is itself controversial; Scott Aaronson, for example, doesn’t share it. This isn’t a justification for including all AI ethics work that is happening, but of the NSF projects, definitely more than one (actually, most of them) appear to me, upon reading their abstracts, as potentially relevant to AGI safety. Note that this NSF grant program is run in partnership with Open Philanthropy, and OpenPhil staff participate in evaluating the projects. So I don’t think they would select many projects irrelevant to AGI safety.
If the funder comes through, I’ll consider writing a second review post.