This link is also relevant. Incidentally, this is part of why I’m a priori skeptical of misaligned AI even without evidence against it: it fits far too well with story logic. In particular, misaligned AIs deliver immediate conflict, which is valuable in a story, whereas safe and/or aligned AIs offer far less conflict and fewer story opportunities.
https://www.understandingai.org/p/predictions-of-ai-doom-are-too-much
There are other ways to be skeptical about fictional paradigms regarding AI. For example, a common paradigm is that AIs escape human control, and then there is a long struggle. An alternative paradigm is that once conflict emerges, the AIs win quickly and humans are permanently marginalized thereafter.