Are...are you seriously advocating blowing up all computer manufacturing facilities? All of them around the world? A single government doing this, acting unilaterally? Because, uh, not to be dramatic or anything, but that’s a really bad idea.
First of all, from an outside view perspective, blowing up buildings which presumably have people inside them is generally considered terrorism.
Second of all, a single government blowing up buildings which are owned by (and in the territory of) other governments is legally considered an act of war. Doing this to every government in the world is therefore, by definition, a world war. A World War III would almost certainly be an x-risk event, with a higher probability of disaster than I'd expect Yudkowsky to assign to simply taking our chances on AGI.
Third of all, even if a government blew up every last manufacturing facility, that government would have to effectively remain in control of the entire world for as long as it takes to solve alignment. Considering that this government just instigated an unprovoked attack on every single nation in existence, I place very slim odds on that happening. And even if they did by some miracle succeed, whoever instigated the attack will have burned any and all goodwill at that point, leading to an environment I highly doubt would be conducive to alignment research.
So am I just misunderstanding you, or did you just say what I thought you said?
A World War III would not “almost certainly be an x-risk event” though.
Nuclear winter wouldn’t do it. Not actual extinction. We don’t have anything now that would do it.
The question was “convince me that humanity isn’t DOOMED,” not “convince me that there is a totally legal and ethical path to preventing AI-driven extinction.”
I interpreted “doomed” as a 0 percent probability of survival. But I think there is a non-zero chance of humanity never making superhumanly intelligent AGI, even if we persist for millions of years.
The longer it takes to make Super-AGI, the greater our chances of survival, because society is getting better and better at controlling rogue actors as the generations pass, and I think that trend is likely to continue.
We worry that tech will someday allow someone to build a world-ending device in their basement, but it could also allow us to monitor every person and their basement with (narrow) AI and/or subhuman AGI at every moment, so well that the possibility of someone getting away with making Super-AGI, or any other crime, may someday seem absurd.
One day, the monitoring could be right in our brains. Mental illness could also be a thing of the past, and education about AGI-related dangers could be universal. Humans could also decide not to increase in number, so as to minimize risk and maximize the resources available to each immortal member of society.
I am not recommending any particular action right now; I am saying we are not 100% doomed by AGI progress to be killed, become pets, etc.
Various possibilities exist.