There was some in-person conversation about the papers among us, but that’s about it. I’ve not seen a strong community develop around this so far; mostly people just publish things one-off and then they go into a void where no one builds on each other’s work. I think this mostly reflects the early stage of the field and the lack of anyone very dedicated to it, though, as I got the impression that most of us were just dabbling in this topic because it was nearby to things we were already interested in and we had some ideas about it.
Ok, that’s what I was afraid of, and what I’m hoping to see change. Since you seem to have thought about this for longer than I have, do you have any suggestions about what to do?
Having just come back from EA Global in SF, I will say I have a much stronger sense that there is a decent number of people hoping to start thinking and talking about coordination for AI safety; at least a significant number of people there (maybe as many as 30) were talking to each other at the conference about it. I’d now update my answer to say I am more confident that there is some dedicated effort happening in this direction, including from Center for Emerging Technologies, Global Catastrophic Risk Initiative, and others spread across multiple organizations.