First, on the meta-level, are you aware that LessWrong supports questions? You might get a more positive reception with: “It seems to me that AI doom arguments miss x. What am I missing?”
While it’s possible that dozens of researchers have missed x for a decade or so, it’s rather more likely that you’re missing something (initially, at least).
On the object level, you seem to be making at least two mistakes:
1. x is the current mechanism supporting y, therefore x is necessary for y. (false in general, false for most x when y is [the most powerful AI around])
2. No case of [humans are still around supporting AI] counts as an x-risk. (also false: humans as slaves/livestock...)
Further, “Except this narrative overlooks one crucial point” seems the wrong emphasis: effective criticism needs to engage with the best models underlying the narrative.
Currently you seem to be:
1. Looking at the narrative.
2. Inferring an underlying model.
3. Critiquing the underlying model you’ve inferred.
That’s fine, but if you’re not working hard to ensure you’ve inferred the correct underlying model, you should expect your critique to miss the point. That too is fine: you usually won’t have time to get the model right, but it’s important to acknowledge that. Using questions is one way; liberally sprinkling “it seems to me that...” is another. NB this isn’t about deference, just an acknowledgement that others have quite a bit of information you don’t (initially).