Strong disagree voted. To me this is analogous to saying that, because Leonardo da Vinci tried to design a flying machine and believed it to be possible despite not really understanding aerodynamics, the Wright brothers’ belief that the aeroplane they designed would fly “can’t really be based on those technical details in any deep or meaningful way.”
“Maybe a thing smarter than humans will eventually displace us” is really not a very complicated argument, and no one is claiming it is. So it should be part of our hypothesis class, and various people, like Turing, thought of it well before modern ML. The “rationally grounded in a technical understanding of today’s deep learning systems” part is about how we update our probabilities over the hypotheses in our hypothesis class, and how we can comfortably say “yes, terrible outcomes still seem plausible”, as they did on priors, without needing to look at AI systems at all (my probability is moderately lower than it would have been without looking at AIs at all, but with massive uncertainty).
Intuition and rigour agreeing is not some kind of highly suspicious gotcha.
The way I think about it, you should have a prior distribution over doom vs no doom, and then getting a bunch of info about current ML should update that. In my opinion, it is highly unreasonable to have a very low prior on “thing smarter than humans successfully acts significantly against our interests”; you should generally be highly uncertain and view this as high variance.
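To make the shape of that concrete, here's a minimal sketch of the “prior plus empirical update” picture in Bayesian terms. The numbers are made up purely for illustration, not anyone's actual estimates:

```python
def posterior_doom(prior_doom: float,
                   p_evidence_given_doom: float,
                   p_evidence_given_no_doom: float) -> float:
    """Bayes' rule over a binary doom vs no-doom hypothesis class."""
    joint_doom = prior_doom * p_evidence_given_doom
    joint_no_doom = (1 - prior_doom) * p_evidence_given_no_doom
    return joint_doom / (joint_doom + joint_no_doom)

# Someone starting highly uncertain (50/50) who thinks the ML evidence
# is mildly more expected under "no doom" updates moderately downward:
print(posterior_doom(0.5, 0.4, 0.6))   # ~0.40

# Someone starting from a very low prior stays low even if the evidence
# favours doom fairly strongly:
print(posterior_doom(0.01, 0.9, 0.3))  # ~0.03
```

The point of the sketch is just that where you end up depends on both terms, so two people can agree on what the evidence shows and still land far apart because of their priors.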
So I guess the question is how many people who think doom is very unlikely just start from a really low prior but agree with me on the empirical updates, versus start from some more uncertain prior but update a bunch downwards on empirical evidence, or at least reasoning about the world. Like: oh, companies are rational enough that they just wouldn’t build something dangerous, and it’ll be easy to test for, and they’ll do this testing. Historically, we’ve solved issues with technology before they arose, so this will be fine. Or we can just turn it off if something goes wrong. I would consider even the notion that there exists an ability to turn it off to be using information that someone in the 19th century would not have had.
My guess is that most reasonable people with low P(doom), who are willing to actually engage with probabilities here, start at a prior of at least 5% but just update down a bunch, for reasons I tend to disagree with/consider wildly overconfident. But maybe you’re arguing that the disagreement now stems from priors?