Takeover, if misaligned, also counts as doom. X-risk includes permanent disempowerment, not just literal extinction. That’s according to Bostrom, who coined the term:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
A reasonably good outcome might be for ASI to set some guardrails to prevent death and disasters (like other black marbles) and then mostly leave us alone.
My understanding is that Neuralink is a bet on “cyborgism”, but it doesn’t look like it will arrive in time. Cyborgs won’t be able to keep up with pure machine intelligence once it begins to take off, but smarter humans might have a better chance of figuring out alignment before it starts. Even purely biological intelligence enhancement (e.g., embryo selection) might help, though it’s not clear that route would be any faster.