Aha. That is the reason they failed with Skynet.
OK, joke aside. From the paper (it is really short) I see that in the safest case (two teams, neither aware of the other or of its own capability, and with capability higher than enmity) the risk is 0. But that zero comes from a simplification and is only a first-order approximation.
Given that we might structure AI development so that AI research must be registered and no communication is allowed except through the AI authority (OK, that might be circumvented, but it at least reduces risk), we may arrive at the zero case above.
But it is not really zero. I’m interested in the exact value as that might still be too high.
Note that I think the capability $e$ will most likely exceed the enmity $\mu$, because the risk of AI failure is so high.
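
To make "the exact value" concrete, here is a minimal numerical sketch of how one could probe that near-zero case. To be clear, this is not the model from the paper: the payoff values (1 for winning safely, $1-e$ for the rival winning safely, 0 for disaster), the uniform capability draws, the scoring rule $\mu \cdot c + \text{risk}$, and the naive best-response iteration are all assumptions of mine, made up for illustration only.

```python
# Toy sketch, NOT the paper's equations: two teams, neither knowing its own
# or the other's capability, capabilities drawn i.i.d. from Uniform(0, 1).
# Assumed payoffs: win safely -> 1, rival wins safely -> 1 - enmity,
# disaster -> 0 for both. The winner is the team with the higher
# mu * capability + risk_taken; disaster probability = winner's risk level.

def win_prob(x_me, x_opp, mu):
    """P(mu*c_me + x_me beats mu*c_opp + x_opp), c ~ Uniform(0, 1) i.i.d."""
    t = max(-1.0, min(1.0, (x_opp - x_me) / mu))   # need c_me - c_opp > t
    return (1 - t) ** 2 / 2 if t >= 0 else 1 - (1 + t) ** 2 / 2

def expected_payoff(x_me, x_opp, mu, enmity):
    """My expected payoff under the assumed payoff structure above."""
    p = win_prob(x_me, x_opp, mu)
    return p * (1 - x_me) + (1 - p) * (1 - x_opp) * (1 - enmity)

def equilibrium_risk(mu, enmity, grid_size=200, iters=100):
    """Symmetric equilibrium risk level via naive best-response iteration."""
    grid = [i / grid_size for i in range(grid_size + 1)]
    x = 0.5
    for _ in range(iters):
        x_new = max(grid, key=lambda cand: expected_payoff(cand, x, mu, enmity))
        if x_new == x:
            break
        x = x_new
    return x  # with both teams at x, the winner's risk (= P(disaster)) is x

# Scan the capability weight relative to enmity: where does risk stop being 0?
enmity = 0.5
for mu in (0.25, 0.5, 0.75, 1.0, 1.5, 2.0):
    x = equilibrium_risk(mu, enmity)
    print(f"mu = {mu:4.2f}, e = {enmity}: equilibrium risk / P(disaster) ~ {x:.3f}")
```

In this toy version the equilibrium risk snaps to exactly zero once $\mu$ is large enough relative to $e$, mirroring the first-order result; the open question is what the exact, higher-order value looks like in the paper's actual model.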