I definitely agree that specific examples would make the argument much stronger. At the very least, they would help me understand what kind of “false alarms” we are talking about here: mere tech hype (such as cold fusion), or specifically humanity-destroying events (such as nuclear war)?
I don’t think we have had that many things that threatened to destroy humanity. Maybe it’s just my ignorance speaking, but nuclear war is the only example that comes to my mind. (Global warming, although possibly a great disaster, is not by itself an extinction threat to all of humanity.) And mere tech hype that never threatened to destroy humanity doesn’t seem like a relevant category for the AI danger.
Perhaps more importantly, with things like LK99 or cold fusion, the only source of hype was “people enthusiastically writing papers”. With AI, the situation is more like “anyone can use (for free, if only a few times a day) a technology that would have been considered sci-fi five years ago”. The controversy is about how far and how fast it will get, but there is no doubt that it is already here… and even if, somehow magically, the state of AI never improved beyond where it is today, we would still have a few more years of social impact as more people learned to use it and found new ways to use it.
EDIT: By “sci-fi” I mean: imagine creating a robotic head that uses speech recognition and synthesis to communicate with humans, uploading the latest LLM into it, and sending it via time machine five or ten years into the past. Or rather, sending thousands of such robotic heads. People would be totally scared (and not just because of the time travel). Finding out that the robotic heads often hallucinate would only calm them down a little.
AI isn’t really a new technology though, right? Do you have evidence of alarmism around AI in the past?
And do you have anecdotes of intelligent/rational people being alarmist about a technology whose dangers turned out to be a false alarm?
I think these pieces of evidence/anecdotes would strengthen your argument.
What is your estimated timeline for humanity’s extinction if it continues on its current path?
What information are you using as the foundation of your beliefs about the progress of science & technology?