One thing to consider is that the argument for AI Risk covers a lot of probability-space. Counterarguments can only remove small portions of that space by ruling out specific subsets of X. Additionally, as you point out, many counterarguments overlap, and ruling out each particular version of a counterargument against a subset of X cannot increase the probability-space of AI Risk by more than the independent probability-space that subset of X accounts for. The existence of weak counterarguments also does not mean that the probability-space they belong to is invulnerable to a stronger counterargument: P(AI Risk | ~Counterargument-1) may equal P(AI Risk | ~Counterargument-1 AND ~Counterargument-2), yet still be greater than P(AI Risk | Strong-counterargument-X).
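To make that last inequality concrete, here is a toy numeric sketch. Every number and the overlap structure (which subsets C1, C2, and a hypothetical "Strong-X" attack) are assumptions invented purely for illustration, not estimates of anything:

```python
# Toy sketch of the overlap point above. All numbers are made-up assumptions.
#
# Assumed setup:
#   * The AI Risk argument assigns risk to scenarios with total mass 0.6.
#   * Weak counterarguments C1 and C2 attack the SAME 0.3-mass subset of
#     those scenarios (near-identical mechanisms, so they stand or fall together).
#   * A hypothetical strong counterargument ("Strong-X") attacks a 0.5-mass
#     subset: the C1/C2 subset plus a further 0.2.
#   * A sound counterargument turns the scenarios it targets into non-risk
#     scenarios; the scenarios still occur, so total mass stays 1.

risk_targeted_by_c1_c2   = 0.3   # also covered by Strong-X
risk_only_strong_reaches = 0.2
risk_untargeted          = 0.1
no_risk                  = 0.4
assert abs(risk_targeted_by_c1_c2 + risk_only_strong_reaches
           + risk_untargeted + no_risk - 1.0) < 1e-9

# Refuting C1 leaves all of the risk mass in place.
p_given_not_c1 = risk_targeted_by_c1_c2 + risk_only_strong_reaches + risk_untargeted

# Refuting C2 as well removes nothing extra: it attacked the same subset.
p_given_not_c1_and_not_c2 = p_given_not_c1

# If the strong counterargument is sound, everything it targets stops being risky.
p_given_strong_x = risk_untargeted

print(f"P(AI Risk | ~C1)          = {p_given_not_c1:.2f}")            # 0.60
print(f"P(AI Risk | ~C1 and ~C2)  = {p_given_not_c1_and_not_c2:.2f}")  # 0.60
print(f"P(AI Risk | Strong-X)     = {p_given_strong_x:.2f}")           # 0.10
```

Under these made-up numbers, knocking down the second overlapping weak counterargument buys nothing beyond knocking down the first, while a single strong counterargument over the same (and a slightly larger) region moves the probability a lot.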
I think you’ve mostly accounted for this by only considering well-thought-out counterarguments, which are more likely to be independent and strong. But I am not confident enough in my ability to predict how likely strong counterarguments are to be found to use that prediction as evidence for AI Risk.
I think the picture is clearer when the situation is compared to the “No AI Risk” default position, which lays out a fairly smooth progression of technology and human development in harmony. In that framing, AI Risk is the counterargument, and it’s very convincing to me. Counter-counterarguments against AI Risk often amount to “but of course AI will work out okay; that’s what our original argument said.” A convincing argument for No AI Risk would have to lay out a highly probable scenario in which humans and machines develop harmoniously into the indefinite future, with no intervention in the development of AI. That seems like a very tall order after reading about AI Risk.