Curated. I thought this was a valuable list of areas, most of which I haven’t thought that much about, and I’ve certainly never seen them brought together in one place before, which is itself a handy pointer to the sort of work that needs doing.
I really am not trying to be snarky here, though I worry this comment will come across that way regardless. This is actually quite an important factual question given that you’ve been around this community for a while, and I’m asking you in your capacity as “person who’s been around for a minute”: is it non-hyperbolically true that no one has published this sort of list before in this community?
I’m asking because, if that’s the case, someone should, e.g., write a series of posts that marches through US government best-practices documents in these domains (e.g., the Chemical Safety Board, DoD NISPOM, etc.) and draws out conclusions for AI policy.
I think humanity’s defense against extinction (slash eternal dictatorship) from AGI is an odd combination of occasionally some very competent people, occasionally some very rich people, and one or two teams, but mostly a lot of ragtag nerds in their off-hours, scrambling to get something together. You should not assume that the things which seem obvious and basic to you will be covered; there are many dropped balls, even though some very smart and competent people have sometimes attempted to do something very well.
To answer the question you asked: I think there have actually been some related lists before, but they’ve been about fields relevant to AI alignment research or to AI policy and government analysis, whereas this list is more relevant to controlling AI systems in worlds where many unaligned AI systems operate alongside humans for multiple years.
Thanks, this is helpful! It also helps me understand much better what is intended to be different about @Buck’s research agenda relative to others, which I hadn’t understood previously.
Open Philanthropy commissioned five case studies of this sort, which ended up being written by Moritz von Knebel; as far as I know they haven’t been published, but plausibly someone could convince him to.
They have in fact been published (it’s in your link), at least the ones whose authors agreed to make them publicly available: these are all the case studies, and Moritz von Knebel’s write-ups are:
- Responsible Care & Security Code (ACC)
- Chemical Facility Anti-Terrorism Standards (CFATS)
- Federal Aviation Administration (FAA) Standards
- IAEA Safeguards
- Probabilistic Risk Assessment (PRA) for Nuclear Risk
- PRAs in other industries
I think that would be a good series of posts! Especially if the person reviewed the recommendations analytically: trying to figure out whether they make sense in the source domain, whether they carry over to AI, and so on.