I mostly agree and have strongly upvoted. However, I have one small but important nitpick about this sentence:
> The risks of imminent harmful action by Sydney are negligible.
I think when it comes to x-risk, the correct question is not “what is the probability that this will result in existential catastrophe”.
Suppose that there is a series of potentially harmful and increasingly risky AIs that each have some probability $p_1, p_2, \dots$ of causing existential catastrophe unless you press a stop button. If the probabilities grow sufficiently slowly, the cumulative risk still accumulates across many individually low-risk steps, so existential catastrophe will most likely first happen at an $n$ where $p_n$ is still low. A better question to ask is “what is the probability of existential catastrophe happening for some $i \le n$”.
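To make the accumulation concrete, here is a minimal numerical sketch, assuming (purely for illustration, these numbers are hypothetical) that the per-AI risk grows linearly as $p_i = i/10{,}000$:

```python
# Hypothetical illustration: per-AI risk grows linearly, p_i = i / 10_000.
# Track the probability that no catastrophe has happened yet, and report
# the first n where the cumulative probability of catastrophe,
# 1 - prod_{i<=n} (1 - p_i), reaches 50%.

survival = 1.0  # P(no catastrophe through the first n AIs)
for n in range(1, 10_001):
    p_n = n / 10_000
    survival *= 1 - p_n
    if 1 - survival >= 0.5:
        print(f"P(catastrophe by step {n}) = {1 - survival:.2f}, "
              f"yet the per-step risk is still only p_n = {p_n:.3f}")
        break
```

Under that (arbitrary) growth rate, catastrophe becomes more likely than not by roughly the 118th AI, even though that AI’s own risk is still only about 1%.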