The Irresistible Attraction of Designing Your Own Utopia
Epistemic status: Somewhat undirected and not very serious. The usual politics warning.
As Usual, the Story Begins in the Human Ancestral Environment
For >90% of human history (and further back, into pre-human primate history), Homo sapiens lived in small bands of about 50 people. In that social environment, it made strategic sense to spend a lot of time actively thinking about alliances and social norms in your band, so as to ally with the right people and to look impressive in debates. In that social world, your status in the eyes of your band was everything. Rise in the band’s status hierarchy and gain access to a whole slew of valuable favors. Fall, and face ridicule, social isolation, restricted access to resources, and maybe exile and death alone in the wild. In other words, social status is humanity’s native currency, and we are fine-tuned to be good at acquiring and protecting it. One of the main ways we did that was by getting good at small-group politics. We are all descendants of those adept in ancestral politics, and so have inherited specialized cognitive machinery built for excelling in those interactions.
Here in the third millennium, people are still running those politics-obsessed neural algorithms. But our contemporary social environment is not the one those algorithms were built for. In today’s social world, rational political ignorance is probably the winning political strategy, given how little control any of us has over any country’s decision-making.
The Irresistible Attraction of Utopian Musings and RTS Faction Design
Despite being extremely confident that this future is vanishingly unlikely, my brain really enjoys devoting lots of time and energy to developing and reflecting on my design for a national (or global) utopia, presumably to be promulgated once I come to absolute power. Cf. Scott Alexander:
The question “If you were a society, what kind of society would you be?” is strangely existential. Some people are bland liberal democracies. Some people are tropical island paradises. Some people are extremely efficient Singaporean city-states. But anyone at all interesting is something that has never quite existed before on Earth. Tolkien was the Elves. I don’t know much about Iain Banks, but it wouldn’t surprise me if he was the Culture.
One guy on Micras is a libertarian. He just sort of hangs around going “Yup, my country’s government still isn’t doing anything. Just hanging around punishing the initiation of force.” It’s very cute.
It makes you examine your soul, conworlding does. Over the centuries, changes in your outlook are mirrored by revolutions in your country’s government. The problems debated in its universities and great books are the problems you struggle with every day. Sometimes your values and aesthetics drift, and some fictional philosopher mirrors the change across a span of worlds. Very rarely, it is the fictional philosopher who makes a good point that the real you is forced to consider.
That’s an incredibly evocative passage, just a ton of fun to chew on! When, on the other hand, I read Yudkowsky on politics for the first time, my takeaway was that that kind of musing is bad for your sanity. If you want to level up your Rationality, because you have something to protect, then practice making predictions in just about any domain but politics. I buy that argument, but have usually found that conclusion difficult to carry out. Thinking about politics in that particular utopian, useless way is just a lot of fun. It scratches the same itch that thinking about RTS factions did when I was a kid: it’s just fun to be in charge of everything, subject only to the real-world constraints you choose to keep, SimCity-style, and take your best stab at working things out as absolute dictator. It’s a fun combination of puzzle and aesthetic exercise.
Utopian Design, One Level Up
Here’s an attempt at reconciling that utopian impulse, which likes listening to and chewing on the (admittedly weakly infohazardous) political concepts people bandy around all day, with the rationalist I want to be, who worries about what all that intellectual junk food is doing to my brain.
When people are asked what single policy proposal they’d unilaterally impose, if they were able to, they generally opt for object-level reforms (zoning reform, universal healthcare, whatever). But their one policy choice is probably better spent on a meta-level reform, which would change the way future policies are enacted such that those future policies will be reliably better than they are now.[1] Designing your utopia at the object level is the same mistake: it’s preparing for the possible world in which you become world dictator but are prohibited from consulting outside advisors in designing the new world order. If you actually came to rule the world, one of the very first things you’d want to do is go find a lot of very smart people and start outsourcing that design work to them. With all the world’s resources at your disposal, you wouldn’t have to personally work out what the street plans should look like. What you can do instead is design your utopia at the meta level: what decision algorithm would I want to install such that it would implement all and only the best object-level policies for me?
My suggestion is that this is a slightly epistemically healthier diversion than playing mental Sid Meier’s Alpha Centauri all the time. For example, rather than drafting every utopian bill myself, in my own hand, the world I want to live in would have its citizens vote on their values, to come up with a social utility function, but bet on their beliefs, to work out which policies would fare best relative to that utility function. Once we’re spending our time thinking about which mechanisms reliably converge on truth the quickest, we’re doing something a whole lot more epistemically fruitful and a whole lot less like tribe-vs.-tribe status-squabbling. A neat feature of utopianism at this level is that people with very different object-level political views and preferences can agree on schemes. However much your world models differ, if you and someone else can agree on a mechanism for finding truths and implementing policies conditional on what it finds, both of you would agree to defer to that mechanism (each being confident beforehand that it’ll recommend your obviously correct policy suggestion). Of course, even in a world where prediction markets decide everything, someone still has to actually do the legwork of coming up with theories and betting based on them. And I have my best guesses about the winning bets in futarchist America. But I feel like this flavor of policy debate mostly entails thinking through how to reliably divine truths, rather than thinking through how you’ll show the imbecilic outgroup how wrong they were once you’re in charge. I’m happy to argue epistemic mechanisms with someone, but I tend to clam up when people want to talk about the object-level hot-button politics of the day, as those conversations are always epistemically abysmal.
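Since “vote on values, bet on beliefs” is the one concrete mechanism named above, here’s a minimal toy sketch of that loop. Everything in it (the candidate metrics, the policies, the market-implied numbers) is invented purely for illustration; a real futarchy would need conditional markets, payout rules, and manipulation-resistance that this skips entirely.

```python
# Toy sketch of "vote on values, bet on beliefs". All metrics, policies,
# and numbers below are hypothetical, invented for this illustration.

# Step 1: vote on values. Citizens vote among candidate welfare metrics;
# a simple plurality here stands in for picking the social utility function.
value_votes = {"median_income": 41, "life_expectancy": 35, "gdp_per_capita": 24}
chosen_metric = max(value_votes, key=value_votes.get)

# Step 2: bet on beliefs. For each proposed policy, imagine a conditional
# prediction market trading on "what will the chosen metric be in ten years,
# if this policy is enacted?" The market price aggregates traders' beliefs
# into a point estimate (the numbers below are made up).
market_estimates = {
    "status_quo": 52_000,
    "zoning_reform": 55_500,
    "universal_healthcare": 54_000,
}

# Step 3: enact whichever policy the markets expect to score best on the
# voted-on metric; in a real futarchy, bets on the other markets unwind.
winner = max(market_estimates, key=market_estimates.get)
print(f"Maximize {chosen_metric!r}; markets recommend {winner!r}")
```

The point of the sketch is just the division of labor: the vote fixes what to optimize, and the markets, not any one utopian drafter, supply the beliefs about how to optimize it.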
[1] I read this somewhere, from someone in the rationalish blogosphere, but I can’t for the life of me track down where that was.