I’m currently running a project at the AI Safety Camp with the aim of developing plausible and detailed failure stories. Unlike your idea, we’re not focusing only on specific nuts and bolts, but aim to describe a complete chain of events from (more or less) today to catastrophe.
Other than that, the classic stories come to mind: King Midas, the Sorcerer’s Apprentice, the Golem, etc.
Thanks for the links, Karl. It wasn’t my focus in this post, but I’m also a fan of stories that attempt to map out plausible possible futures, so your project sounds really interesting.
Not exactly a short fable, but definitely a story about alignment: https://www.lesswrong.com/posts/rSiybWzeiG8agYtNr/virtua-a-novel-about-ai-alignment.
You may also be interested in this: https://aiimpacts.org/partially-plausible-fictional-ai-futures/