Some more thoughts:
1. It might be the case that these organizations already have security procedures, but I’d expect those procedures to be somewhat ad hoc, particularly for the more recently formed organizations. If they’re not, I’ll just be pleasantly surprised. I could also imagine Latacora having more optimization power along the security dimension than, say, MIRI.
2. I imagine that explaining the security profile to them might be fun.
3. I can imagine that as Latacora has grown larger, their proportion of junior to senior people might have changed. It seems to me that AI Safety orgs would want to bid for the more senior people, rather than for the more recent hires.
4. I imagine that Latacora’s job might be greatly facilitated by:
AI safety orgs not needing to comply with bureaucratic requirements (such as security certifications)
AI Safety orgs not literally expecting AGI this year, thus giving time to prepare (unlike bureaucratic requirements or business deadlines, which don’t)
5. I also imagine that asking for a security team such as Latacora to be integrated into, e.g., DeepMind, is a nice specific ask which people with short timelines might want to push for.
It seems like MIRI already had a very strong security policy, one that significantly inhibited their ability to do their job. By hiring professionals like Latacora, MIRI might not only become more secure but also get helpful advice about which practices are creating an unnecessary burden.
I had similar thoughts.
DeepMind specifically has Google’s security people on call, which is to say the best that money can buy. For others, well, AI Safety Needs Great Engineers and Anthropic is hiring, including for security.
(opinions my own, you know the drill)
I can imagine situations where having people “on call” and having them “on site” provide different levels of security, but you probably have more insight here. For example, DeepMind being able to call in a Google security team after a breach has already happened doesn’t provide that much security.
I can imagine setups where Google’s security people are already integrated into DeepMind, but I can also imagine setups where DeepMind has a few really top security people and that still doesn’t provide a paranoid enough level of security.
Being part of Google, I would expect that DeepMind already has good access to security expertise. If you think that an external service like Latacora can do things that internal Google services can’t, can you expand on why you think so?
I don’t think that Latacora can do things that an internal Google service literally can’t.