Other than the fact that there are some parallels to AI boxing, no, I don’t know why they need those.
Oh, sorry for being excessively brief then. SingInst would want to know how likely various forms of existential risk are, so they know how likely it is that humanity will survive for X years without FAI. There are various trade-offs that would need to be made differently depending on how urgent things are.
Oh. I thought they meant experts in how to ensure (or break...) it, not in how good it is in general worldwide.
That’s also possible (ensuring it, that is; they don’t want to break it). I don’t think they can do much about those problems at the present stage. If they barely have time to publish work on the singularity, it probably isn’t a good idea to spend marginal effort trying to change bio-security policy.