Worse yet, and probably more common, is having an image of an AI apocalypse that comes from irrational or distorted sources.
Having a vivid image of an obviously fictional AI apocalypse, one your mind jumps to whenever you hear people talking about X-risks, is often far more thought-limiting than having no preconceived image at all.
This was the main hurdle I had to believing in AI doom—I didn’t have any coherent argument against it, and I found the doomy arguments pretty convincing. But the conclusion just sounded silly. I’d fall back on talking points like “Well, in the 1800s, people who believed in sci-fi narratives like you do thought that electricity would resurrect the dead and that we’d be punished for playing god. You shouldn’t take these paranoias so seriously.”
(This is why I, and several other people I know, intentionally avoid evoking sci-fi-associated imagery when talking about AI.)