occurred to me belatedly to consider what tools mainstream philosophy has to deal with the “train to crazy town” problem, since i’m running a meetup on it. all of my required and supplemental readings come from various rationalists/eas/adjs and this is kinda insular. claude pointed me to the concept of reflective equilibrium. per its SEP page,
Equilibrium is reached where principles and judgments have been revised such that they agree with each other. In short, the method of reflective equilibrium is the mutual adjustment of principles and judgments in the light of relevant argument and theory.
it’s the “dominant method in moral and political philosophy”:
Its advocates suggest that “it is the only rational game in town for the moral theorist” or “the only defensible method” (DePaul 1993: 6; Scanlon 2003: 149; see also Freeman 2007: 35–36; Floyd 2017: 377–378). Though often endorsed, it is far more frequently used. Wherever a philosopher presents principles, motivated by arguments and examples, they are likely to be using the method. They are adjusting their principles—and with luck, their readers’—to the judgments suggested by the arguments and examples. Alternatively they might “bite the bullet” by adjusting initially discordant judgments to accommodate otherwise appealing principles. Either way, they are usually describing a process of reflective equilibrium, with principles adjusted to judgments or vice versa.
its objections section is substantive and points out that this is basically an intellectually empty methodology due to all the shenanigans one can pull when the method is functionally “just think about stuff until the vibes feel right”. what’s the response to that?
By this point the critic may be exasperated. If you identify a problem with someone’s way of doing philosophy, and they agree that it’s a problem, you might expect them to change how they do it. But the adherent of wide reflective equilibrium accepts the criticism but maintains their method, saying that they have adopted the criticism within the method. To critics this suggests that the method is “close to vacuous” (Singer 2005: 349), absorbing methodological controversies rather than adjudicating them (McPherson 2015: 661; Paulo 2020: 346; de Maagt 2017: 458). It just takes us back to the usual philosophical argument about the merits and demerits of various methods of argument and of various theories. The method of reflective equilibrium is then not a method in moral philosophy at all. (Raz 1982: 309) Defenders of wide reflective equilibrium describe it in similar terms to the critics, while rejecting their negative evaluation. Its ability to absorb apparent rivals is seen as a feature, not a bug. [emphasis mine]
i… hate this? it’s like ea judo’s evil twin. the article ends by pointing out a bunch of philosophical methods and theories that are incompatible with reflective equilibrium but basically shrugs its shoulders and goes oh well, it’s the dominant paradigm and no one serious is particularly interested in tearing it down.
i kinda thought that ey’s anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao. semantic stopsign ass framework
In case you haven’t seen it, there’s an essay on the EA forum about a paper by Tyler Cowen which argues that there’s no way to “get off” the train to crazy town. I.e. it may be a fundamental limitation of utilitarianism plus scope sensitivity, that this moral framework necessarily collapses everything into a single value (utility) to optimize at the expense of everything else. Some excerpts:
So, the problem is this. Effective Altruism wants to be able to say that things other than utility matter—not just in the sense that they have some moral weight, but in the sense that they can actually be relevant to deciding what to do, not just swamped by utility calculations. Cowen makes the condition more precise, identifying it as the denial of the following claim: given two options, no matter how other morally-relevant factors are distributed between the options, you can always find a distribution of utility such that the option with the larger amount of utility is better. The hope that you can have ‘utilitarianism minus the controversial bits’ relies on denying precisely this claim. …
Now, at the same time, Effective Altruists also want to emphasise the relevance of scale to moral decision-making. The central insight of early Effective Altruists was to resist scope insensitivity and to begin systematically examining the numbers involved in various issues. ‘Longtermist’ Effective Altruists are deeply motivated by the idea that ‘the future is vast’: the huge numbers of future people that could potentially exist gives us a lot of reason to try to make the future better. The fact that some interventions produce so much more utility—do so much more good—than others is one of the main grounds for prioritising them. So while it would technically be a solution to our problem to declare (e.g.) that considerations of utility become effectively irrelevant once the numbers get too big, that would be unacceptable to Effective Altruists. Scale matters in Effective Altruism (rightly so, I would say!), and it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values. …
I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes. …
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
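to make the dilemma in these excerpts concrete, here’s a toy numerical sketch (mine, not from cowen’s paper or the forum essay; the `overall_value` scoring, the arctan cap, and the specific numbers are arbitrary illustrative choices). it scores each option as utility’s contribution plus the sum of the other morally relevant factors, then grows the utility gap:

```python
# toy illustration (my numbers, not cowen's): does a huge utility gap
# swamp every other moral consideration?
import math

def overall_value(utility, other_values, bounded=False):
    # bounded=False: utility counts linearly, so its marginal weight never vanishes.
    # bounded=True: utility's contribution saturates via arctan, i.e.
    # "the marginal value of utility diminishes asymptotically towards zero".
    u = math.atan(utility) if bounded else utility
    return u + sum(other_values)

# option A: modest utility but strong on three other morally relevant factors.
# option B: nothing going for it except an ever-growing pile of utility.
other_a, other_b = [10.0, 10.0, 10.0], [0.0, 0.0, 0.0]

for utility_b in (10, 1_000, 1_000_000):
    linear = overall_value(utility_b, other_b) > overall_value(5, other_a)
    capped = overall_value(utility_b, other_b, bounded=True) > overall_value(5, other_a, bounded=True)
    print(f"utility gap {utility_b:>9,}: B wins linearly? {linear}  B wins with the cap? {capped}")

# linear scoring: B wins once its utility clears ~35, and from then on no
# redistribution of the other values can save A (swamping).
# capped scoring: B can never add more than pi/2 from utility, so A always
# wins, but only because scale has stopped mattering past a point.
```

arctan here is just a stand-in for “the marginal value of utility diminishes asymptotically towards zero”; any bounded function would make the same point, and any unbounded one would not.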
i agree that there doesn’t seem to be any rigorous, principled way to get off the crazy train, and that fundamentally it does come down to vibes. but that only makes it worse if people are uncritical/uncurious/uncaring/unrigorous about how said vibes are generated. like, i see angst in the ea sphere about the inconsistency/intransitivity, and various attempts to discuss or tackle it, and this seems useful to me even though it’s still mostly groping around in the dark. in academia there seems to be a missing mood.
He actually cites reflective equilibrium here:
Closest antecedents in academic metaethics are Rawls and Goodman’s reflective equilibrium, Harsanyi and Railton’s ideal advisor theories, and Frank Jackson’s moral functionalism.
this week’s meetup is on the train to crazy town. it was fun putting together all the readings and discussion questions, and i’m optimistic about how the meetup’s going to turn out! (i mean, in general, i don’t run meetups i’m not optimistic about, so i guess that’s not saying much.) im slightly worried about some folks coming in and just being like “this metaphor is entirely unproductive and sucks”; i should think about how to frame the meetup productively for such folks.
i think one of my strengths as an organizer is that ive read sooooo much stuff and so its relatively easy for me to pull together cohesive readings for any meetup. but ultimately im not sure if it’s like, the most important work, to e.g. put together a bibliography of the crazy town idea and its various appearances since 2021. still, it’s fun to do.
we’re getting a dozen people and having to split into 2 groups on the regular! discussion was undirected but fun (one group got derailed bc someone read the shrimp welfare piece and updated their value system toward suffering not being inherently bad, and this kind of sniped the rest of us).
feel like I didn’t get a lot out of it intellectually though, since we didn’t engage significantly with the metaphor. it was interesting how people (including me) seem to shy away from the fact that our de facto moral system bottoms out at vibes.
i fear this week’s meetup might have an unusually large amount of “guy who is very into theoretical tabletop game design but has never playtested their products, which have lovely readable manuals” energy, but i like the topic a lot and am having an unusually hard time killing my darlings :’)
actually it was really good! people had lots to say about the subject even without any prompting by the discussion questions. they were nice to have on standby though.
the default number of baguettes to buy per meetup should be increased to 3.