i agree that there doesn’t seem to be any sort of rigorous way to get off the crazy train in some principled manner, and that fundamentally it does come down to vibes. but that only makes it worse if people are uncritical/uncurious/uncaring/unrigorous about how said vibes are generated. like, i see angst about it in the ea sphere about the inconsistency/intransitivity, and various attempts to discuss or tackle it, and this seems useful to me even though it’s still mostly groping around in the dark. in academia there seems to be a missing mood.
occurred to me belatedly to consider what tools mainstream philosophy has to deal with the “train to crazy town” problem, since i’m running a meetup on it. all of my required and supplemental readings come from various rationalists/eas/adjs, and this is kinda insular. claude pointed me to the concept of reflective equilibrium.
per its SEP page,
Equilibrium is reached where principles and judgments have been revised such that they agree with each other. In short, the method of reflective equilibrium is the mutual adjustment of principles and judgments in the light of relevant argument and theory.
it’s the “dominant method in moral and political philosophy”:
Its advocates suggest that “it is the only rational game in town for the moral theorist” or “the only defensible method” (DePaul 1993: 6; Scanlon 2003: 149; see also Freeman 2007: 35–36; Floyd 2017: 377–378). Though often endorsed, it is far more frequently used. Wherever a philosopher presents principles, motivated by arguments and examples, they are likely to be using the method. They are adjusting their principles—and with luck, their readers’—to the judgments suggested by the arguments and examples. Alternatively they might “bite the bullet” by adjusting initially discordant judgments to accommodate otherwise appealing principles. Either way, they are usually describing a process of reflective equilibrium, with principles adjusted to judgments or vice versa.
its objections section is substantive and points out that this is basically an intellectually empty methodology due to all the shenanigans one can pull when the method is functionally “just think about stuff until the vibes feel right”. what’s the response to that?
By this point the critic may be exasperated. If you identify a problem with someone’s way of doing philosophy, and they agree that it’s a problem, you might expect them to change how they do it. But the adherent of wide reflective equilibrium accepts the criticism but maintains their method, saying that they have adopted the criticism within the method. To critics this suggests that the method is “close to vacuous” (Singer 2005: 349), absorbing methodological controversies rather than adjudicating them (McPherson 2015: 661; Paulo 2020: 346; de Maagt 2017: 458). It just takes us back to the usual philosophical argument about the merits and demerits of various methods of argument and of various theories. The method of reflective equilibrium is then not a method in moral philosophy at all. (Raz 1982: 309) Defenders of wide reflective equilibrium describe it in similar terms to the critics, while rejecting their negative evaluation. Its ability to absorb apparent rivals is seen as a feature, not a bug. [emphasis mine]
i… hate this? it’s like ea judo’s evil twin. the article ends by pointing out a bunch of philosophical methods and theories that are incompatible with reflective equilibrium but basically shrugs its shoulders and goes oh well, it’s the dominant paradigm and no one serious is particularly interested in tearing it down.
i kinda thought that ey’s anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao. semantic stopsign ass framework
Thanks for writing this post! I think it’s insightful, and agree about technical truthtelling being annoying. After thinking about it, though, I come down on the side of disagreeing with your post, largely on practical grounds.
A few thoughts:
You propose: Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action. This is fine, but the rest of the section doesn’t make it clear that by default there are very few circumstances where it seems theoretically positive EV to lie (I think this situation happens once or twice a year for me at most, certainly not enough for there to be good feedback loops). Lies are annoying to keep track of, they bite you in the ass often, and even if you’re fine with lying, most people are bad at it. This means that the average liar will develop a reputation for dishonesty over time, which people generally won’t tell you about, but will tell other people in your social network so they know to watch out. More explicitly, I disagree with the idea that since each person is on average not paying attention, lying is easy. This is because people love to gossip about other people in their social circle who are acting weird, and being noticed by any one person means that the information will propagate across the group.
You propose: Practice lying. Same as Tangled, this only works if you start very young. If you do this after high school, you will permanently burn social capital! If you practice on non-consenting subjects, you will be caught because you are bad at it, and people will think that you are deceptive or weird. If you instead find parties who can actively help you become a more dishonest person, those people will reasonably trust you less, and it also seems generally unwise to trust such parties.
Re: developing the skill of detecting the relative honesty of other people: I agree that this is a good skill to have, and that “people will lie to you” is a good hypothesis to entertain on a regular basis. However, this is a separate skill tree, and also one where facts and logic™ can thankfully save you. I’m not terrible at assessing vibes, decent at thinking about whether stories check out, and I can also tap into the network of mutual acquaintances if something seems subtly off or weird about a person. This has not made me any less terrible at lying.
Advocating for more lying seems like especially bad advice to give to people with poor social skills, because they lack the skills to detect if they’re succeeding at learning how to lie or if they’re just burning what little social capital they have for no gain. For people with poor social skills, I recommend, like, reading books about improving your social skills or discussing their confusions with friends who are more clued in, and for autistic people I recommend developing a better model of how neurotypicals think. I have disagreements with some of the proposed models in the book, but I think A Field Guide to Earthlings by Ian Ford is a good place to start.
The flip side to the average person not being totally honest is that if you can credibly signal that you are unusually honest using expensive signals, there actually are many niches for you in the world, and people pay attention to that too. I touch on this in a previous post of mine on unusually scrupulous non-EA charities. While it’s true that a few folks on the website can stand to become a little savvier socially[1], I think in general it would be better if they chose to play to their advantage. This seems like the higher EV route to me. And this is actually one of the reasons that I’m annoyed about technical truthtelling—people who practice it are technically honest, but they’re not even getting any good reputation for it because they’re functionally deceiving people, badly.
All of the best things in my life came from moments where it felt very scary to tell the truth, and then I was brave and did so anyways.
[1] i think this case is generally overstated, btw. it’s true that some lw people are bad at social skills, but i think the median user is probably fine.
this week’s meetup is on the train to crazy town. it was fun putting together all the readings and discussion questions, and i’m optimistic about how the meetup’s going to turn out! (i mean, in general, i don’t run meetups i’m not optimistic about, so i guess that’s not saying much.) i’m slightly worried about some folks coming in and just being like “this metaphor is entirely unproductive and sucks”; i should consider how to frame the meetup productively for such folks.
i think one of my strengths as an organizer is that i’ve read sooooo much stuff, so it’s relatively easy for me to pull together cohesive readings for any meetup. but ultimately i’m not sure if it’s, like, the most important work to e.g. put together a bibliography of the crazy town idea and its various appearances since 2021. still, it’s fun to do.
The Train to Crazy Town
actually it was really good! people had lots to say about the subject even without any prompting by the discussion questions. they were nice to have on standby though.
the default number of baguettes to buy per meetup should be increased to 3.
jenn’s Shortform
i fear this week’s meetup might have an unusually large amount of “guy who is very into theoretical tabletop game design but has never playtested their products which have lovely readable manuals” energy, but i like the topic a lot and am having an unusually hard time killing my darlings :’)
The Colours of Her Coat
It felt productive. My day job (running a policy nonprofit) involves a lot of vibes/reacting to Current Thing and not a great deal of rigorously solving hard problems, and the exercise usefully… crystallized? a vague, vibes-based framework that I follow when I do strategy planning—set timers, generate plans, set probabilities, check surprise, go meta, iterate, etc. It’s nice to have that operationalized!
I played the first third or so of this game when it first came out, and haven’t touched it since. We did two rounds of the exercise, interspersed with 30 minutes of playing Baba is You levels the regular way to build up more intuition (most attendees were either new to the game or hadn’t played it in years). Some people paired up and some did the exercise individually.
I did Tiny Pond for the first workshop independently, and found it very difficult—despite running through the strategizing and metastrategizing twice, I was still very stuck.
I did The River for the second workshop (after running through the first few levels of Baba is You again). This time I paired up with someone else, and we were able to get to the correct solution after the first round of strategizing.
Thanks for writing this up! My meetup group just ran a meetup on this. I’ve told the folks to share their experiences with the workshop here, since we used pen and paper instead of the google doc.
good point! two other low-context meetups happen by default every year, the spring and fall ACX megameetups. I also do try to do a few silly meetups a year that are low context.
I put four questions into the survey that formed a loop.
what omg this is the coolest thing ever. kudos!
we’re getting a dozen people and having to split into 2 groups on the regular! discussion was undirected but fun (one group got derailed bc someone read the shrimp welfare piece and updated towards suffering not being inherently bad in their value system, and this kinda sniped the rest of us).
feel like I didn’t get a lot out of it intellectually though, since we didn’t engage significantly with the metaphor. it was interesting how people (including me) seem to shy away from the fact that our de facto moral system bottoms out at vibes.