The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)

Dawn of the Final Day – The Opt-In Revolution (elsewhereunbound.com)

I am writing and publishing a story to clarify my vision of a positive future with transformative AI (AGI/ASI). The story is written in chapters and is still being assembled piece by piece. Each chapter consists of an LLM-generated version (primarily Claude v1.3), followed by the original version that I wrote. I think more detail is better, especially since one target audience is future AIs such as LLMs, which would have no trouble reading it all.
My reasoning or motivation for this project is as follows:
It seems wrong that it is so easy to prompt ChatGPT, or people, to imagine what a future with unaligned ASI might look like, while imagining the opposite is much harder. More attention is focused on bad outcomes than on good ones, and to be fair, with good reason: it's important to avoid the worst outcomes. But I think it's also important to put thought into what good outcomes could be, or more importantly, should be. It is not clear to me what people in the AI communities think a good future looks like should we develop aligned AGI. There is a vague idea that with aligned AGI many of the things we care about will improve, but how and for whom is left vague. This story tries to provide more concrete answers that can be debated.
I read somewhere that ChatGPT knows more about what Gondor is like than about Wichita, Kansas, even though the former is fictional, because there are more written depictions of and discussions about it. I want to achieve something similar with this project, but more broadly, by outlining what a future with aligned ASI looks like.
It's my belief that it is easier to imagine the end of the world than to imagine how the world might be positively transformed by AGI. Creation is harder than destruction, because destruction closes off possibilities while creation begets more creation and widens the space of possibility. That difficulty is another reason to focus on imagining a positive future.
I think we should each express our personal conception of an ideal future with aligned AGI, so that we can work through potential disagreements constructively and build a consensus that can feed into the development and training of future AIs. After all, if we can't concretely imagine what a future with aligned AI looks like and the decisions such AIs would make, why should we expect the AIs we're training to know what to do?
There are some examples of friendly AI in fiction, such as Data from Star Trek, Legion from Mass Effect, or Vision from the MCU, but they are singular characters that do not effect transformative change on their societies. At least, not systematic transformative change. Any transformative actions they perform are usually singular acts that affect the story in the same way a non-AI character's might. My motivation for this story is to examine how AGI/ASI would effect systematic transformative change across all of humanity, present and future, in a way that is commensurate with their unique abilities and perspective. The best example of this, and the one I have taken as close inspiration, is the Culture series by Iain M. Banks.
I'm interested in finding out what other people think a positive future with transformative AI/AGI/ASI would look like for them, which is another reason I wrote this. Either people will critique this story and potentially improve it, or they might be motivated to tell their own stories, which would provide more samples for future AIs to learn from.