Along with p(doom), perhaps we should talk about p(takeover): the probability that the creation of AI leads to the end of human control over human affairs. I am not sure about doom, but I strongly expect superhuman AI to have the final say in everything.
(I am uncertain of the prospects for any human to keep up via “cyborgism”, a path which could escape the dichotomy of humans in control vs humans not in control.)
A takeover by misaligned AI also counts as doom. X-risk includes permanent disempowerment, not just literal extinction. That’s according to Bostrom, who coined the term:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
A reasonably good outcome might be for ASI to set some guardrails to prevent death and disasters (like other black marbles) and then mostly leave us alone.
My understanding is that Neuralink is a bet on “cyborgism”. It doesn’t look like it will make it in time. Cyborgs won’t be able to keep up with pure machine intelligence once it begins to take off, but maybe smarter humans would have a better chance of figuring out alignment before it starts. Even purely biological intelligence enhancement (e.g., embryo selection) might help, but that might not be any faster.