people in biosecurity have been discussing the risk of such things, yes. it is thankfully not quite that simple, yet. but it is pretty much the ai risk model: every scenario where an ai tries to destroy humanity involves, at some point, an ai or an ai-human pair deciding to do an advanced enough version of this to actually make it work. if humanity is to survive, we need to pretty much entirely solve biosecurity.