Epistemic Status: High degree of uncertainty, owing to the difficulty of AI timeline prediction and to unknowns such as unforeseen technologies and the power of highly developed AI.
My Existential Risk (ER) probability mass is almost entirely formed from the risk of unfriendly Artificial Super Intelligence (ASI), and so is heavily influenced by my predicted AI timelines. (I think AGI is most likely to occur around 2030 ±5 years, and will be followed within 0-4 years by ASI, with a singularity soon after that; see my AI timelines post: https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines?commentId=zaWhEdteBG63nkQ3Z ).
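As a rough illustration of how these timeline beliefs could be turned into a distribution over ASI arrival decades, here is a minimal Monte Carlo sketch. The distributional choices are illustrative assumptions on my part, not part of the original prediction: "2030 ±5 years" is read as roughly a 90% interval of a normal distribution (standard deviation ~3 years), and the AGI-to-ASI gap is taken as uniform over the stated 0-4 year window.

```python
import random

random.seed(0)  # reproducible illustration
N = 100_000
asi_counts = {}

for _ in range(N):
    # AGI arrival: treating "2030 +-5 years" as roughly a 90% interval
    # of a normal distribution gives a standard deviation near 3 years
    # (an assumed reading, not stated in the comment).
    agi_year = random.gauss(2030, 3)
    # AGI-to-ASI gap: uniform over the stated 0-4 year window.
    asi_year = agi_year + random.uniform(0, 4)
    decade = int(asi_year // 10) * 10
    asi_counts[decade] = asi_counts.get(decade, 0) + 1

for decade in sorted(asi_counts):
    print(f"{decade}s: {asi_counts[decade] / N:.1%}")
```

Under these assumptions most of the probability mass for ASI arrival lands in the 2030s, with the remainder spilling into the late 2020s and the 2040s, which is consistent with the decade-level ER estimates below.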
I do not think any conventional threat such as nuclear war, super pandemic or climate change is likely to be an ER, and super volcanoes or asteroid impacts are very unlikely to be. I think this century is unique and will constitute 99% of the total ER, with the remaining <1% coming from more unusual threats such as the simulation being turned off, false vacuum collapse, or hostile alien ASI, as well as from unforeseen or unimagined threats.
I think the most likely decade for the creation of ASI is the 2030s, with an 8% ER chance (from failing to solve the control problem, or failing to coordinate to implement a solution even if one is found).
Considering AI timeline uncertainty, as well as how long an ASI would take to acquire the techniques or technologies necessary to wipe out or lock out humanity, I estimate an 11% ER chance for the 2040s. Part of the reason this is higher than the 2030s estimate is to accommodate the possibility of a delayed treacherous turn.
Once past the 2050s, I think we will be out of most of the danger (only 6% for the rest of the century), and potential remaining ERs such as runaway nanotech or biotech will not pose a very large risk, as ASI should be in firm control of civilisation by then. Even so, some danger remains for the rest of the century from unforeseen black-ball technologies; however, interstellar civilisational spread (ASI probes travelling at a high fraction of the speed of light) by early next century should reduce nearly all remaining threats to below existential level.
So overall I think the 21st century poses a 25.6% chance of ER. See the Elicit post (https://elicit.ought.org/builder/0n64Yv2BE) for the individual decade breakdowns.
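As a quick consistency check on how the decade figures above combine into the 25.6% total, here is a short sketch. Note that the 2020s figure is my inference from the stated total, not a number given in the comment:

```python
# Decade-level ER estimates from the comment above; the 2020s figure is
# inferred as the remainder needed to reach the stated 25.6% total.
er_by_period = {
    "2020s (inferred)": 0.006,
    "2030s": 0.08,
    "2040s": 0.11,
    "2050s-2090s": 0.06,
}

# Treating existential catastrophe in each period as mutually exclusive
# events (it can only happen once), the century total is a simple sum.
total = sum(er_by_period.values())
print(f"Total 21st-century ER: {total:.1%}")  # -> 25.6%
```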
Note: I made this prediction before looking at the Effective Altruism Database of Existential Risk Estimates.
(Minor & meta: I'd suggest people take screenshots which include the credence on "More than 2120-01-01" on the right, as I think that's quite an important part of one's prediction. But of course, readers can still find that part of your prediction by reading your comment or clicking the link; it's just not as immediately visible.)
> I do not think any conventional threat such as nuclear war, super pandemic or climate change is likely to be an ER
Are you including risks from advanced biotechnology in that category? To me, it would seem odd to call that a “conventional threat”; that category sounds to me like it would refer to things we have a decent amount of understanding of and experience with. (Really this is more of a spectrum, and our understanding of and experience with risks from nuclear war and climate change is of course limited in key ways as well. But I’d say it’s notably less limited than is the case with advanced biotech or advanced AI.)
> with the remaining <1% coming from more unusual threats such as the simulation being turned off, false vacuum collapse, or hostile alien ASI, as well as from unforeseen or unimagined threats
It appears to me that there are some important risks that have been foreseen and imagined which you’re not accounting for. Let me know if you want me to say more; I hesitate merely because I’m wary of pulling independent views towards community views in a thread like this, not for infohazard reasons (the things I have in mind are widely discussed and non-exotic).
> Note: I made this prediction before looking at the Effective Altruism Database of Existential Risk Estimates.
I think it’s cool that you made this explicit, to inform how and how much people update on your views if they’ve already updated on views in that database :)
I'm not including advanced biotech in my conventional threat category; I really should have elaborated more on what I meant. Conventional risks are events that already have a background chance of happening (as of 2020 or so); the category does not include future technologies.
I make the distinction because I think we don't have enough time left before ASI to develop such advanced tech ourselves, so an ASI would be overseeing its development and deployment, which I think reduces the threat massively. (Even if the tech were used by a rogue AI, I would say the ER came from the AI, not the tech.) That time limit applies not just to tech development but also to runaway optimisation processes and societal forces (i.e. suboptimal value lock-in), as a friendly ASI should have enough power to bring them to heel.
My list of threats wasn't all-inclusive; I paid lip service to some advanced tech and some of the more unusual scenarios, but generally I just thought that past ASI nearly nothing would pose a real threat, so I didn't focus on it. I am going to read through the database of existential risk estimates though; does it include what you were referring to? ("important risks that have been foreseen and imagined which you're not accounting for")
Thanks for the feedback :)
> Conventional risks are events that already have a background chance of happening (as of 2020 or so); the category does not include future technologies.
Yeah, that aligns with how I’d interpret the term. I asked about advanced biotech because I noticed it was absent from your answer unless it was included in “super pandemic”, so I was wondering whether you were counting it as a conventional risk (which seemed odd) or excluding it from your analysis (which also seems odd to me, personally, but at least now I understand your short-AI-timelines-based reasoning for that!).
> I am going to read through the database of existential risk estimates though; does it include what you were referring to?
Yeah, I think all the things I’d consider most important are in there. Or at least “most”—I’d have to think for longer in order to be sure about “all”.
There are scenarios that I think aren't explicitly addressed by any estimates in that database, such as things to do with whole-brain emulation or brain-computer interfaces, but these are arguably covered by other estimates. (I also don't have a strong view on how important WBE or BCI scenarios are.)