Hmm, I think the current situation is a bit more complicated. Yes, we can't just bring in a safety consultant to fix things up, but it's also the case that there isn't always a way to meaningfully talk about safety with everyone's research, because much of it is too far removed from safety concerns. To use the bridge metaphor, it would be like talking about bridge safety with someone doing research on mortar: yes, mortar has impacts on safety, but those impacts are pretty remote until you put the mortar in the context of a full system. And very few people (at least at this conference) are doing something on the order of building a bridge/AGI; instead, they're focused on improvements to algorithms and architectures that they believe are on the path to figuring out how to build the thing at all.
That said, I think all of your suggested actions sound reasonable, because it now seems to me that the primary issue may simply be changing the culture of AI/AGI research to have a much stronger safety focus.