I think all three of the estimates mentioned there correspond to marginal probabilities (rather than probabilities conditioned on “no governance interventions”). So those estimates already account for scenarios in which governance interventions save the world. Therefore, it seems we should not strongly update against the necessity of governance interventions just because those estimates are optimistic.
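To spell out the distinction, here is the decomposition implied by the law of total probability (a minimal sketch; the event names are mine, introduced only for illustration):

$$P(\text{doom}) = P(\text{doom} \mid \text{governance})\,P(\text{governance}) + P(\text{doom} \mid \text{no governance})\,P(\text{no governance})$$

A marginal estimate of $P(\text{doom})$ averages over both branches, so it can be low partly because the governance branch goes well; it does not by itself tell us how large $P(\text{doom} \mid \text{no governance})$ is.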
I normally give ~50% as my probability we’d be fine without any kind of coordination.
Upvoted for giving this number, but what does it mean exactly? You expect “50% fine” through all kinds of x-risk, assuming no coordination from now until the end of the universe? Or just assuming no coordination until AGI? Is it just AI risk instead of all x-risk, or just risk from narrow AI alignment? If “AI risk”, are you including risks from AI exacerbating human safety problems, or AI differentially accelerating dangerous technologies? Is it 50% probability that humanity survives (which might be “fine” to some people) or 50% that we end up with a nearly optimal universe? Do you have a document that gives all of your quantitative risk estimates with clear explanations of what they mean?
(Sorry to put you on the spot here when I haven’t produced anything like that myself, but I just want to convey how confusing all this is.)