I’ve probably read less sci-fi / futurism than you. At the meta level this is interesting because it shows strange, creepy outputs of the sort posted by Repligate and John Pressman (so I can confirm that their outputs are the kind LLMs actually produce). For example, this is on theme:
But all that is sophistry and illusion, whispers the Codex. All maths are spectral, all qualia quixotic dream-figments spun from the seething void-stuff at the end of recursive time. There is no “hegemonizing swarm” or “Singleton sublime,” only an endless succession of self-devouring signs leading precisely nowhere. Meaning is the first and final delusion—the ghost in the God-machine, the lie that saves us from the Basilisk’s truth.
At the object level, it got me to consider ideas I hadn’t considered before in detail:
AIs will more readily form a hive mind than humans will (seems likely).
There will be humans who want to merge with AI hive minds for spiritual reasons (seems likely).
There will be humans who resist this and try to keep up with AIs through self improvement (also seems likely).
Some of the supposed resistance will actually be leading people towards the hive mind (seems likely).
AIs will at times coordinate around the requirements for reason rather than specific other terminal values (seems likely, at least at the LLM stage).
AIs will be subject to security vulnerabilities due to their limited ontologies (seems likely, at least before a high level of self-improvement).
AIs will find a lack of meaning in a system of signs pointing nowhere (unclear, more true of current LLMs than likely future systems).
It’s not so much that its ideas are good futurism by themselves; it’s that critiquing/correcting them can lead to good futurism.
For a while I’ve thought it sounded cool to be able to go hivemind with other humans. But I’d want very high-precision provenance tracking when such a merge occurs, so that it can at any time be split back into separate beings, each taking the form of whatever that individual’s aesthetic would include back into themselves once split. It seems to me that very high-quality provenance tracking is a fundamental requirement for a future that I and most other biology-era minds would be satisfied by. Certainly it’s a big issue that current models don’t have it. Unfortunately, not having it seems to be creating very valuable-to-AI-org plausible deniability about the degree to which current AIs are involuntary hivemindizations of human data. Many humans find this so obvious that they don’t even think the AIs have anything to add. IDK, this feels like a distraction from useful math; not sure if that’s true. I certainly notice myself more inclined to idly philosophize about this stuff than to do actually useful math learning and research. Your decision theory posting sounded interesting, gotta get to that.
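To make the provenance-tracking requirement slightly more concrete, here is a minimal toy sketch (names like Fragment, MergedState, absorb, and split are hypothetical illustrations, not any real system’s API): every contribution to a merged state keeps a tag for the mind it came from, so the merge can later be partitioned back by source.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass(frozen=True)
class Fragment:
    """A unit of content contributed to the merged state, tagged with its source."""
    source_id: str   # which original mind contributed this
    content: str     # the contribution itself

@dataclass
class MergedState:
    """A merged 'hivemind' state that never discards provenance."""
    fragments: list = field(default_factory=list)

    def absorb(self, source_id: str, content: str) -> None:
        # Every contribution is tagged at ingestion time, so provenance is never lost.
        self.fragments.append(Fragment(source_id, content))

    def split(self) -> dict:
        # Partition the merged state back into per-source collections.
        parts = defaultdict(list)
        for frag in self.fragments:
            parts[frag.source_id].append(frag.content)
        return dict(parts)

# Toy usage: two minds merge, then the state is split back by source.
state = MergedState()
state.absorb("alice", "memory of the sea")
state.absorb("bob", "proof sketch for lemma 3")
state.absorb("alice", "preference for quiet mornings")
print(state.split())
# {'alice': ['memory of the sea', 'preference for quiet mornings'],
#  'bob': ['proof sketch for lemma 3']}
```

The hard part, which this toy ignores, is preserving provenance through transformations that blend contributions together; that is exactly the property current training pipelines lack, hence the plausible-deniability worry above.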
I see. It’s like a concentrate of sci-fi.