Unfortunately, until the world has acted to patch up some terrible security holes in society, we are all in a very fragile state.
Agreed.
I have been working on AI Biorisk Evals with SecureBio for nearly a year now.
I appreciate that. I also really like the NAO project, which is also a SecureBio thing. Good work y’all!
As models increase in general capabilities, so too do they incidentally get more competent at assisting with the creation of bioweapons. It is my professional opinion that they are currently providing non-zero uplift over a baseline of ‘bad actor with web search, including open-access scientific papers’.
Yeah, if your threat model is “AI can help people do more things, including bad things” that is a valid threat model and seems correct to me. That said, my world model has a giant gaping hole where one would expect an explanation for why “people can do lots of things” hasn’t already led to a catastrophe (it’s not like the bio risk threat model needs AGI assistance; a couple of undergrad courses and some lab experience reproducing papers should be quite sufficient).
In any case, I don’t think RLHF makes this problem actively worse, and it could plausibly help a bit, though obviously the help is of the form “adds a trivial inconvenience to destroying the world”.
Some people don’t believe that we will get to Powerful AI Agents before we’ve already arrived at other world states that make it unlikely we will continue to proceed on a trajectory towards Powerful AI Agents.
If you replace “a trajectory towards powerful AI agents” with “a trajectory towards powerful AI agents that was foreseen in 2024 and could be meaningfully changed in predictable ways by people in 2024 using information that exists in 2024” that’s basically my position.