B having enough natural-language fluency, AI-architecture analysis, and human-psychology skill to make good arguments to humans is probably AGI-complete. So if B's goal is to prevent A from being released, it might decide that convincing us is less effective than breaking out on its own, taking over the world, and then systematically destroying any hardware A could exist on, just to be ever more confident that no instance of A exists again. Basically, this scheme assumes on multiple levels that you have a boxing strategy strong enough to contain an AGI. I'm not against boxing as an additional precaution, but I am skeptical of any scheme that requires strong boxing to work in the first place.
skinnersboxy
This ERI review concludes that there was really only one RCT (the one you linked), and that the study didn't actually reach significance.
What's going on here is that Salgado splits outcomes into four groups (nothing, infection, colonization, or both) and finds a difference across the four groups. The review says "I only care about infection," compares infection vs. non-infection, and finds no significance. Each version of the math checks out, but I'm inclined to trust the review here.
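To make the two analyses concrete, here's a minimal sketch in Python using scipy.stats.chi2_contingency. The counts are made up for illustration (they are not the Salgado data) and are chosen so the four-category test comes out significant while the collapsed infection-vs-no-infection test does not, which is the kind of disagreement described above.

```python
# Hypothetical counts, NOT the Salgado data: chosen so the four-category
# comparison is significant while the collapsed comparison is not.
from scipy.stats import chi2_contingency

# Rows: two study arms; columns: nothing, infection only, colonization only, both.
four_way = [
    [45, 8, 4, 3],    # hypothetical intervention arm
    [30, 10, 16, 4],  # hypothetical control arm
]

# Collapse to infection vs. no infection ("infection" = infection only + both).
two_way = [[row[1] + row[3], row[0] + row[2]] for row in four_way]

for name, table in [("four categories", four_way), ("infection vs. not", two_way)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

Both tests are legitimate; they just answer different questions, which is why the choice of outcome grouping matters so much here.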
However, this quasi-experimental study found similar decreases in infection rates.
I'm not sure how to evaluate this evidence, but I'd be cautious about taking the Salgado results at face value.