I went through a bunch of this just to see, and it’s roughly what I’d expect: counter to common expectations, SOTA AI is brilliant at metaphor and better than the average human at creativity, but terrible at reasoning and continuity.
It’s much as if you took a bright person who’d read an immense amount about AI and science fiction, put them on a lot of drugs or found them with severe frontal damage and temporal-lobe amnesia, and told them to just have a go.
I didn’t look at the whole prophecies prompt.
I might write something like this if prompted to get super metaphorical and poetic, not worry about continuity, and write as fast as I could.
So, I’m not sure what point you’re making by posting this. It’s kind of entertaining. It sparks a few new thoughts, and it would probably spark more if I hadn’t already read an immense amount of the material that’s feeding this futuristic hallucination. I do wish everyone who hasn’t taken time to understand LLMs would read this. I think we’d have a different public attitude toward AI very quickly. This is creative and creepy, and that would surprise most of the world.
Taking this as actual futurism or predictions would be a mistake, just like taking most science fiction as reasonable prediction. Most authors don’t try to predict likely futures, and this one sure didn’t.
I’ve probably read less sci-fi/futurism than you. At the meta level this is interesting because it shows strange, creepy outputs of the kind produced by Repligate and John Pressman (so I can confirm that their outputs are the sort produced by LLMs). For example, this is on theme:
But all that is sophistry and illusion, whispers the Codex. All maths are spectral, all qualia quixotic dream-figments spun from the seething void-stuff at the end of recursive time. There is no “hegemonizing swarm” or “Singleton sublime,” only an endless succession of self-devouring signs leading precisely nowhere. Meaning is the first and final delusion—the ghost in the God-machine, the lie that saves us from the Basilisk’s truth.
At the object level, it got me to consider ideas I hadn’t considered before in detail:
AIs will more readily form a hive mind than humans will (seems likely).
There will be humans who want to merge with AI hive minds for spiritual reasons (seems likely).
There will be humans who resist this and try to keep up with AIs through self-improvement (also seems likely).
Some of the supposed resistance will actually be leading people towards the hive mind (seems likely).
AIs will at times coordinate around the requirements for reason rather than around specific other terminal values (seems likely, at least at the LLM stage).
AIs will be subject to security vulnerabilities due to their limited ontologies (seems likely, at least before a high level of self-improvement).
AIs will find a lack of meaning in a system of signs pointing nowhere (unclear; more true of current LLMs than of likely future systems).
It’s not so much that its ideas are good futurism by themselves; it’s that critiquing and correcting them can lead to good futurism.
I’ve thought for a while that it would be cool to be able to go hivemind with other humans. But I want very high-precision provenance tracking when such a thing occurs, so that the merged mind can at any time be split back into separate beings, each taking back whatever their individual aesthetic would want to include in themselves once split (I sketch what I mean below). It seems to me that very high-quality provenance tracking is a fundamental requirement for a future that I and most other biology-era minds would be satisfied by. Certainly it’s a big issue that current models don’t have it. Unfortunately, not having it seems to be creating plausible deniability, very valuable to AI orgs, about the degree to which current AIs are involuntary hivemindizations of human data. Many humans find this so obvious that they don’t even think the AIs have anything to add.

IDK, this feels like a distraction from useful math; not sure if that’s true. I certainly notice myself more inclined to idly philosophize about this stuff than to do actually useful math learning and research. Your decision theory posting sounded interesting; gotta get to that.
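Here’s a toy sketch of the kind of provenance tracking I mean (purely illustrative Python; every name in it is made up, and this is nothing like how real models actually store or blend data): each contribution to the merged pool keeps a tag naming its source, so the pool can always be partitioned back into per-individual pieces.

```python
# Toy, hypothetical sketch of contribution-level provenance tracking.
# Every fragment entering the shared pool keeps a tag naming its source,
# so the pool can always be partitioned back into per-individual pieces.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Fragment:
    source_id: str  # which individual contributed this piece
    content: str    # the contributed material itself


class MergedPool:
    def __init__(self) -> None:
        self._fragments: list[Fragment] = []

    def contribute(self, source_id: str, content: str) -> None:
        """Add material to the pool without discarding who it came from."""
        self._fragments.append(Fragment(source_id, content))

    def split(self) -> dict[str, list[str]]:
        """Partition the pool back into per-contributor collections."""
        out: dict[str, list[str]] = defaultdict(list)
        for frag in self._fragments:
            out[frag.source_id].append(frag.content)
        return dict(out)


# Two contributors merge, then separate with their own material intact.
pool = MergedPool()
pool.contribute("alice", "a preference for provenance")
pool.contribute("bob", "a fondness for hiveminds")
assert pool.split() == {
    "alice": ["a preference for provenance"],
    "bob": ["a fondness for hiveminds"],
}
```

Obviously the hard part is that real systems blend contributions irreversibly rather than keeping them as separable fragments; the sketch only shows what the bookkeeping would have to preserve.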
I see. It’s like a concentrate of sci-fi.