Book review: The Quincunx
The Quincunx is a 1989 novel by Charles Palliser, set in early 1800s England. I want to recommend it to everyone because it’s really good, and it might be relevant to the AI transition. Let me try to explain.
The surface level of the book is a kind of mishmash of Dickensian themes. The main character is caught in a complicated inheritance dispute involving multiple families, each with histories of murder, uncertain parentage, stolen and returned documents, and so on. The plot contains numerous puzzles that are fun to solve, and the amount of planning behind it is really kind of amazing: there are tons of details, everyone lies or makes mistakes, and yet it all connects logically.
But the really interesting level of the book is the social level. The main character doesn’t just progress through a series of plot puzzles; he also starts out as a child of minor nobility and then moves downward through society. His journey is a kind of descent into hell, ending in the lowest levels of poverty that existed in the early 1800s. The book is very well researched in that regard, borrowing a lot from the fantastic “London Labour and the London Poor”. There are parallel plotlines involving rich and poor people, and the book paints a vivid picture of how the rich prey upon the poor.
England at that time was conducting enclosures. Basically, rich people put up fences around common land to graze sheep on it. The poor were left with no land to grow food on, and had to go somewhere else. They ended up in cities, living in slums, trying to find scarce work and giving their last pennies to slumlords. In short, it was a story of mass impoverishment of the population, conducted by the state and upper levels of society, who all benefited from it.
In the book we get a tour of all of it: the countryside being hollowed out; the city with its desperate search for work, the run-down lodgings, the drinking, prostitution, and crime (we spend a bit of time with the protagonist living in a gang); the sometimes horrifying occupations that people are pushed into (like scrounging for coins in the sewer tunnels under the city while avoiding the tides); the injuries, disabilities, and early deaths. Where Dickens called out specific social ills in order to fix them, like the workhouses in Oliver Twist, Palliser says that society as a whole is unjust. His account is so historically detailed that it somehow transcends time and makes you feel that the same kind of events are happening now.
I think it’s especially important not to forget such stories, because they offer an analogy to what might happen with the rise of AI. If AI can do your job more cheaply than you, and can outbid you for the resources you need to survive (most importantly land), and if there are many other tools available to AIs and AI companies, like crafting messages that get you to exchange your savings for consumption, or lobbying for laws with superhuman skill, then we might be facing the same kind of future as the poor in The Quincunx.
And the main reason I wanted to make this point, and write this review, is that AI alignment isn’t enough to prevent this. All of the above can be done legally. It can be done with the endorsement of the state, since the state happily benefits from AI as it did from enclosures. And it can be done by AI that is “aligned” to people, because historically these things were done by people. There’s nothing higher than people to align to. The regulator, the AI company boss, and all the other nice people involved are no different in nature from the people back then. When given power, they’ll probably screw over the rest of us.
That about concludes the review, and I can say I recommend the book to everyone. It’s a great puzzle-box book; it’s a carefully researched historical novel; it’s something of a redpill book that can make you more socially conscious (as it did for me); and it might be a description of our future as well, if things go the way AI companies want.
This would require a scenario a lot like in the podcast we were talking about, where there’s a government-led project to get to transformative AI, and then rather than using that AI to dramatically help all humanity, the government instead decides to ban using AI to dramatically help all humanity (as a side effect of affirming the status quo and banning all uses of AI that threaten its own power), while still allowing limited access to this AI technology by the wealthy and powerful.
I actually don’t think this is that likely, despite the fact that some people claim to be aiming for this future (or some similar future where humans remain in control and capitalism doesn’t suffer a discontinuity). Even assuming this AI project doesn’t kill everyone or otherwise go wrong, I think in an egalitarian setting there’s overwhelming pressure to take transformative actions (save people’s lives, etc.), and even in a dictatorial or plutocratic setting there’s a lot of pressure to take transformative dictatorial actions (for your basic hedonist: kill off everyone they don’t care for to save resources; for your more refined dictator: subtly arrange events so that their preferred political decisions work wonderfully and produce a flourishing civilization full of people who view them as a great leader).
(Edited because my previous reply was a bit off the mark.)
I don’t think this scenario depends on government. If AI is better at all jobs and can make more efficient use of all resources, “AI does all jobs and uses all resources” is the efficient market outcome. All that’s needed is that companies align their AIs to the company’s money interest, and people use and adapt AI in the pursuit of money interest. Which is what’s happening now.
A single AI taking dramatic transformative action seems less likely to me, because it’ll have to take place in a world already planted thick with AI and near-AI following money interests.
Tangent to this post, but I read it by listening to the narration, and there are substantial differences between the text of the post as narrated and the text that actually appears on the screen. I’ve noticed a smaller version of this with other posts in the past, but this time it seemed especially notable.
Yeah, I think the narration doesn’t catch up when I edit the post, and I’ve edited it a lot. Maybe there’s a button to refresh it but I haven’t found it. @habryka?
We work with T3Audio on the narration, and I think they don’t really update it after initial publication. It costs us some non-trivial amount of money (like $1 or so) to narrate a post, which means we can’t just re-narrate it on every edit without opening ourselves up to burning a bunch of money without reason. Not sure what the ideal thing here is.
Maybe instead of narrating posts automatically when they’re published, the poster could be shown a message like “Do you want to narrate this post right now? Once narrated, the audio cannot be changed.” If they say no, there’s a button they can press to narrate it later (e.g. after editing). And maybe you could charge $1 if people want to change the audio after accepting their one free narration?
I’m not sure what the best solution is in general. For this post specifically, maybe we could drop the narration?
On the other hand it was fun getting to hear an older version of the post and see what changed!
Historically, a purposeful decision that permanently lifts the whole population out of poverty was never on the table. Overall indifference doesn’t prevent occasional philanthropy, but philanthropists were not that rich. So if there is some alignment (in the pseudokindness sense), the main issue is surviving until some group that cares gets rich enough. Which is not straightforward, since destruction of the biosphere is a default side effect of post-human scaling of industry, and moderation in the overall indifference toward humanity is crucial at that step.
I think hoping for “pseudokindness” doesn’t really work. You can care one-millionth about a flower, but you’ll still pave it over if your desire for a parking lot in its place is more than one-millionth. And if we’re counting on AIs to have certain drives in tiny amounts, we shouldn’t just talk about kindness, but also, for example, about a desire for justice (leading to punishment and s-risk). So putting our hopes on these one-millionths feels really risky.
Pseudokindness is not quite kindness; it’s granting resources for some form of autonomous development with surviving boundaries. The hypothesis is that this is a naturally meaningful thing, not something that gets arbitrarily distorted by the path-dependence of AI values; that is, path-dependence mostly reduces its weight but doesn’t change the target. Astronomical wealth then enables enclaves of philanthropically supported descendants of humanity, even if most AIs mostly don’t care.
The argument doesn’t say that there aren’t also hells, though on the hypothesis that pseudokindness is natural, those would be a concurrent thing, not an alternative. I don’t see as strong an argument for the naturality of hells as for pseudokindness, since it requires finding a place between not caring about humanity at all and the supposed attractor of caring about humanity correctly. The crux is whether that attractor is a real thing, possibly to a large degree due to the initial state of AIs as trained on humanity’s data.
Thanks for this! I ended up reading The Quincunx based on this review and really enjoyed it.
As an aside, I want to recommend a physical book instead of the Kindle version, for a couple reasons:
There are maps and genealogy diagrams interspersed between chapters, but they were difficult or impossible to read on the Kindle.
I discovered, only after finishing the book, that there’s a list of characters at the back. It would have been extremely helpful to refer to while reading. There are a lot of characters, and I can’t tell you how many times I tried highlighting someone’s name, hoping that Kindle’s X-Ray feature would work and remind me who they were (since they may have last appeared hundreds of pages before). But it doesn’t seem to be enabled for this book.
(Also, without the physical book, I didn’t realize how long The Quincunx is.)
Even with those difficulties, a great read.