Note: I’m not explaining my reasoning in this post, just recording my predictions and sharing how I feel.
I’ll sound like a boring cliche at this point, but I just wanted to say it publicly: my AGI timelines shortened earlier this year.
Without thinking too much about quantifying my probabilities, I’d say the probabilities that we’ll get AGI, or AI strong enough to prevent AGI (including through omnicide), are:
18% <2033
18% 2033-2043
18% 2043-2053
18% 2053-2070
28% 2070+ or won’t happen
But at this point I feel like not much would surprise me in terms of short timelines. Transformative AI seems really close. Short timelines and AI x-risk concerns are common among people working in AI and among people trying to predict the development of this tech. It’s the first time I’ve been feeling sick to my stomach when thinking about AI timelines. First time that my mind is this emotionally focused on the threat, simulating what the last moments before an AI omnicide would look like.
What fraction of the world would be concerned about AI x-risk 1 second before an AI omnicide? Plausibly very low.
Will people see their death coming? For example, because a drone breaks their house window just before shooting them in the head. And if so, will people be able to say “Ah, Mati was right” just before they die, or will they just think it’s a terrorist attack or something like that? I imagine losing access to the Internet and cellphone communication, not thinking much of it, while a drone is on its way to kill me.
Before AI overpowers humanity, will people think I was wrong because AI is actually providing a crazy amount of wealth? (even though I already expect that wealth to materialize)
Will I have time to post my next AI x-risk fiction story before AI kills us all? I better get to it.
To be clear, this fear is not at all debilitating or otherwise pathological.
(I know some of those thoughts are silly; I’m obviously predominantly concerned about omnicide, not about publishing my fiction or being acknowledged.)
I’m finding myself wanting to simplify my life, do things faster, and focus even more on AI. (I still care about and support cryonics and cause areas adjacent to AI, like genetic engineering.)
In a few years, I might live in a constant state of thinking I could drop dead at any time from an AGI.
I used to think the most likely cause of my death would be an insufficiently good cryopreservation, but now I think it’s misaligned AGI. It seems likely to me that most people alive today will die from an AI omnicide.