A World War III would not “almost certainly be an x-risk event” though.
Nuclear winter wouldn’t do it. Not actual extinction. We don’t have anything now that would do it.
The question was “convince me that humanity isn’t DOOMED” not “convince me that there is a totally legal and ethical path to preventing AI driven extinction”
I interpreted "doomed" as a 0 percent probability of survival. But I think there is a non-zero chance of humanity never making superhumanly intelligent AGI, even if we persist for millions of years.
The longer it takes to make Super-AGI, the better our chances of survival, because society is getting better and better at controlling rogue actors as the generations pass, and I think that trend is likely to continue.
We worry that tech will someday allow someone to make a world-ending device in their basement. But it could also allow us to monitor every person and their basement, every moment, with narrow AI and/or subhuman AGI, so thoroughly that the possibility of someone getting away with making Super-AGI, or committing any other crime, may someday seem absurd.
One day, the monitoring could be right in our brains. Mental illness could also be a thing of the past, and education about AGI-related dangers could be universal. Humans could also decide not to increase in number, so as to minimize risk and maximize the resources available to each immortal member of society.
I am not recommending any particular action right now; I am saying we are not 100% doomed by AGI progress to be killed, become pets, etc.
Various possibilities exist.