“Let’s give the model/virus the tools it needs to cause massive harm and see how it does! We’ll learn a lot from seeing what it does!”
Am I wrong in thinking this whole testing procedure is extremely risky? This seems like the AI equivalent of gain-of-function research on biological viruses.