SSA vs. SIA: how future population may provide evidence for or against the foundations of political liberalism
Disclaimer
The ideas I will present here combine some concepts that are likely familiar to people on LessWrong (namely, anthropic reasoning, the Doomsday argument, and Bayesian inference) with some that may not be (namely, Heideggerian ontology). Combining ideas well-grounded in rationality with ones that are not is inherently somewhat unusual. Nevertheless, I am confident that people on LessWrong will not dismiss them simply because they are unusual but will rather respond with rational arguments.
Summary
I’ve recently revisited the Doomsday argument and realized that some of the fundamental questions regarding sampling are very similar to the questions at the root of political philosophy. At this point, I think they are actually the same thing, and that the combined knowledge of one’s birth rank and the total cumulative population reached may provide Bayesian evidence for or against the foundational assumptions of political liberalism. Specifically, I posit that liberalism is likely to be true if and only if the total cumulative population eventually reached is one that the standard Doomsday argument implies to be highly improbable.
Introduction
Many arguments in political philosophy are implicitly (or explicitly) about the existence of an original position from which arguments can be made. Liberalism (or, perhaps more accurately, left-liberalism) supposes the existence of such a prior state (that is, a state prior to the specification of existential attributes) as part of the nature of human Being. John Rawls is the most famous proponent of this position. On the other side of the argument are thinkers like Martin Heidegger who claim that the prior state of human Being is not some Rawlsian original position in which the subject exists entirely without attributes and without any connection to or relation with the world, but rather Dasein, a state in which object and subject exist together in a composite form. The nature of human Being is thus understood as possessing a certain inherent “thrownness” (German: Geworfenheit) into the world.
Understandably, the latter interpretation of Being is popular among critics of liberalism. Heidegger himself was a supporter of National Socialism and today his thought deeply influences Russian anti-liberalism (particularly the thought of Aleksandr Dugin). The rejection of Rawlsian liberalism is also a key component of noted Chinese pro-CCP political philosopher Jiang Shigong’s thought, though his rejection of Rawls may be understood as being of a somewhat less ontological nature than Dugin’s.
The argument between these two interpretations of human Being has so far been viewed as being fundamentally beyond rationalism and positivism, being instead understood as constituting an arbitrary choice that eventually manifests itself in (geo)politics as Carl Schmitt’s friend-enemy distinction. The argument presented here is a bold attempt to bridge the gap between this debate and the realm of positivism.
The Self-Indication Assumption
The Self-Indication Assumption (SIA) is a rebuttal to the Doomsday argument that objects to its uniform prior. Instead, SIA posits that the probability of an individual existing at all depends on how many humans will ever exist. Thus the fact that an observer exists provides evidence that the total number of observers is high. Or, to directly quote Nick Bostrom:
Given the fact that people exist, people should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.
It can be shown that this argument perfectly cancels the Doomsday argument in the case of a vague prior. The problem with the argument, however, is that it requires either reasoning from a prior state in which existence is not yet certain or from a prior state in which it is uncertain in which universe (within a multiverse) one will be placed. While the latter option is mildly more palatable (and has hence convinced some philosophers), I shall posit that both whether one exists and in which universe (i.e., in which causally disjoint domain) one exists are extremely existential properties, meaning that reasoning from a prior state before they are specified requires (Rawlsian) liberalism to be true.
Moreover, it is also worth noting that we have no evidence of the multiverse existing. If we argue that the multiverse must exist due to either the preference for larger populations in SIA or the fine-tuned nature of physical laws, this argument would require the application of reasoning from a prior state in which existence is not yet certain. Invocation of the multiverse, without external evidence of its existence, thus only provides the illusion of circumventing the need for reasoning from a prior state in which existence is not yet certain.
The Self-Sampling Assumption
The Self-Sampling Assumption (SSA) is the alternative to the SIA. In this case:
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
This very straightforwardly leads to the Doomsday argument. It is also compatible with notions of human Being that are skeptical of the existence of prior states that do not contain existentially important information. Indeed, the prior state (from which one reasons) becomes the state prior to knowing one’s absolute birth rank. The basic idea of the Doomsday argument is that the probability density of one’s fractional position remains uniformly distributed even after learning one’s absolute position. The basic argument I am making here is that absolute birth position is to be regarded as a far more incidental (i.e., far less existential) piece of information about a person relative to existence vs. non-existence (or even existence in one universe vs. existence in another parallel universe). Thus the prior state can be regarded as already possessing “thrownness” into the world (i.e., you know that you will exist and that your existence will take place in this world).
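To make the standard calculation concrete: writing $n$ for one’s absolute birth rank and $N$ for the total cumulative population, if the fractional position $f = n/N$ remains uniformly distributed on $(0, 1]$ even after learning $n$, then for any factor $x > 1$,

$$P(N > x \cdot n \mid n) = P\!\left(f < \frac{1}{x}\right) = \frac{1}{x}.$$

With $n \approx 10^{11}$ (roughly the number of humans born so far), the probability that the cumulative total ever exceeds $10^{13}$ is then only about 1%.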
The Argument
My argument relies on two basic assumptions:
1. The Doomsday argument is methodologically correct.
2. The Self-Indication Assumption is true if and only if the (Rawlsian) liberal interpretation of human Being is correct and the Heideggerian interpretation is wrong.
Note that the second assumption is a single condition (rather than two conditions) since we regard the Rawlsian and Heideggerian interpretations as being both mutually exclusive and collectively exhaustive.
Under SSA with a vague prior, the Doomsday argument implies that

$$p(N \mid n) = \frac{n}{N^2}, \qquad N \ge n,$$

where $n$ is the absolute birth rank of the observer and $N$ is the cumulative total population reached by humanity.
Meanwhile, under SIA with a vague prior, it is implied that $N$ has to be infinite. This is because the vague prior, once reweighted by SIA toward larger populations, is equivalent to randomly picking a number larger than $n$ with uniform probability. Since 100% of numbers larger than $n$ lie beyond any finite bound, we are certain that if liberalism is correct, then $N$ must be infinite. Thus if $N$ is finite, liberalism cannot be correct.
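One way to see this, keeping the notation above: the SIA weighting multiplies the vague prior by the population size,

$$p(N) \propto \underbrace{\frac{1}{N}}_{\text{vague prior}} \cdot \underbrace{N}_{\text{SIA weighting}} = \text{const.},$$

an improper uniform distribution over all $N \ge n$. For any finite bound $B$, the mass above $B$ infinitely outweighs the mass below it, so $P(N > B) = 1$ for every finite $B$.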
However, liberals need not despair yet, since (as we shall investigate in the next section) the conclusion given by the vague prior is unstable with respect to doubt regarding the validity of that prior. Indeed, if the vague prior used implies that it is extremely likely that the total cumulative population will be either no more than a few trillion or literally infinite, then a result of, for example, $N = 10^{14}$ should not be interpreted as the realization of the tiny probability that the standard Doomsday argument assigns to reaching $10^{14}$, but rather as strong evidence that our prior was wrong. Note that criticism of SIA on the grounds that it fails under the vague prior is nothing new.
Using a Capped Prior
To restore coherence to our argument under SIA, we shall apply a cap to our vague prior. The cap is supposed to stand for the largest possible number of humans that can be born and is thus supposed to be extremely high. If an appropriately high cap is chosen, it is likely that our argument under SSA will remain effectively unchanged at

$$p(N \mid n) \approx \frac{n}{N^2}.$$
However, in the case of SIA we now have

$$p(N \mid n) = \frac{1}{N \ln(N_{\max}/n)}, \qquad n \le N \le N_{\max},$$

where $N_{\max}$ is our cap.
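For completeness, a sketch of where this density comes from (the same reasoning as in the uncapped case): SIA’s population weighting flattens the vague prior into a uniform prior on $[1, N_{\max}]$, and updating on one’s birth rank $n$ (uniform within any population of size $N \ge n$) leaves a posterior proportional to $1/N$. The $\ln(N_{\max}/n)$ is simply the normalization constant:

$$\int_n^{N_{\max}} \frac{dN}{N} = \ln\!\left(\frac{N_{\max}}{n}\right).$$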
Now we shall proceed to derive the probability of the correctness of Heidegger’s interpretation of Being. To do so, we shall slightly shift our notation. We shall now denote the observed total cumulative population achieved as $N_{\text{obs}}$. We shall denote the probability that Heidegger is correct as $P(H)$. Given that SSA is true if Heidegger is correct, we may write,

$$p(N_{\text{obs}} \mid H) = \frac{n}{N_{\text{obs}}^2}.$$
We now require a prior for $P(H)$. I am going to pick 50%. This is not perfect, but given that both sides of the debate have been supported by some of the smartest philosophers in history, the outside view indicates that priors deviating far from 50% (e.g., 1% or 99%) are pretty unreasonable. Likewise, there is still substantial disagreement among the smartest philosophers about whether SIA is correct. It is also worth noting that our current world is pretty evenly split on the issue in geopolitical terms, with the West accepting Rawls (in part explicitly, in part implicitly), while Russia takes Heidegger’s side explicitly, and China does so implicitly. (Disclaimer: the last sentence expresses my personal opinion and is not important to the broader argument.)
We may now combine our SIA and SSA versions as a probability-weighted sum to obtain,

$$p(N_{\text{obs}}) = P(H)\,\frac{n}{N_{\text{obs}}^2} + \bigl(1 - P(H)\bigr)\,\frac{1}{N_{\text{obs}}\ln(N_{\max}/n)}.$$
Then we may use Bayes’ rule to solve for $P(H \mid N_{\text{obs}})$,

$$P(H \mid N_{\text{obs}}) = \frac{P(H)\,\dfrac{n}{N_{\text{obs}}^2}}{P(H)\,\dfrac{n}{N_{\text{obs}}^2} + \bigl(1 - P(H)\bigr)\,\dfrac{1}{N_{\text{obs}}\ln(N_{\max}/n)}}.$$
Finally, we determine the probability of the correctness of liberalism, denoted as $P(L \mid N_{\text{obs}})$, using the properties of mutual exclusivity and collective exhaustiveness,

$$P(L \mid N_{\text{obs}}) = 1 - P(H \mid N_{\text{obs}}).$$
If we plug in $n = 10^{11}$ and an extremely high cap (for illustration, $N_{\max} = 10^{30}$), we get the following graph:

[Graph: $P(L \mid N_{\text{obs}})$ as a function of $N_{\text{obs}}$.]

For these values, the probability of liberalism being true would be about 2% if humanity ended tomorrow and about 96% if it endured up to a cumulative 100 trillion people.
Thankfully, the relationship is reasonably robust in $N_{\max}$. The no-information point (i.e., the value of $N_{\text{obs}}$ for which $P(L \mid N_{\text{obs}}) = 50\%$) is given by

$$N_{\text{no-info}} = n \ln\!\left(\frac{N_{\max}}{n}\right).$$
The no-information point ranges from the low to the high single-digit trillions as we increase $N_{\max}$ from the very low value of $10^{16}$ to the very high value of $10^{50}$.
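For readers who want to check the numbers, here is a minimal Python sketch of the calculation above. The specific cap $N_{\max} = 10^{30}$ is my illustrative choice, consistent with the figures quoted; any similarly extreme value behaves almost identically.

```python
import math

n = 1e11      # absolute birth rank (~100 billion humans born so far)
N_max = 1e30  # illustrative cap on how many humans can ever be born
P_H = 0.5     # prior probability that Heidegger (and hence SSA) is correct

def p_liberalism(N_obs):
    """Posterior probability of liberalism given observed cumulative population N_obs."""
    ssa = n / N_obs**2                       # SSA (Doomsday) likelihood density
    sia = 1 / (N_obs * math.log(N_max / n))  # capped SIA likelihood density
    p_heidegger = P_H * ssa / (P_H * ssa + (1 - P_H) * sia)
    return 1 - p_heidegger

print(p_liberalism(1e11))       # extinction at ~today's rank -> ~0.02
print(p_liberalism(1e14))       # 100 trillion people         -> ~0.96
print(n * math.log(N_max / n))  # no-information point        -> ~4.4e12
```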
Estimation Before Extinction
Sticking with our capped-prior example, we shall now investigate the Bayesian estimate for the probability that liberalism is correct, given only that the cumulative population has reached some level $C$ while humanity still endures.
The probability that a certain cumulative population $C$ will be exceeded in the case of SSA is

$$P(N > C \mid H) = \int_C^{N_{\max}} \frac{n}{N^2}\,dN = n\left(\frac{1}{C} - \frac{1}{N_{\max}}\right) \approx \frac{n}{C}.$$

Note again that the integrals from $C$ to $N_{\max}$ and from $C$ to $\infty$ will only differ to a negligible degree, since only a negligible proportion of probability lies between $N_{\max}$ and $\infty$ in the standard Doomsday argument.
The probability that a certain cumulative population $C$ will be exceeded in the case of SIA is

$$P(N > C \mid L) = \int_C^{N_{\max}} \frac{dN}{N \ln(N_{\max}/n)} = \frac{\ln(N_{\max}/C)}{\ln(N_{\max}/n)}.$$
We may now solve for the probability of liberalism being true, already using $P(L) = P(H) = 50\%$,

$$P(N > C) = \frac{1}{2}\,\frac{n}{C} + \frac{1}{2}\,\frac{\ln(N_{\max}/C)}{\ln(N_{\max}/n)}.$$
Applying Bayes’ rule (the 50% priors cancel),

$$P(L \mid N > C) = \frac{\dfrac{\ln(N_{\max}/C)}{\ln(N_{\max}/n)}}{\dfrac{\ln(N_{\max}/C)}{\ln(N_{\max}/n)} + \dfrac{n}{C}}.$$
Once again, using $n = 10^{11}$ and $N_{\max} = 10^{30}$, we get:

[Graph: $P(L \mid N > C)$ as a function of $C$.]
As expected, we can see that at our current position of 100 billion, we are at our prior 50% probability of liberalism being correct. However, this probability now increases more rapidly, reaching about 90% and 99% for cumulative populations of a trillion and 10 trillion, respectively. The reason for this more rapid increase is that the standard Doomsday argument basically runs out of probability at population levels above a trillion: populations above that not only have a low probability in the standard Doomsday scenario but also lead us to expect that the population will keep growing for far longer (since it is likely at that point that the standard Doomsday argument is false). This also means that while our post-extinction argument was only reasonably robust in $N_{\max}$, our pre-extinction argument works for arbitrarily high values of $N_{\max}$, allowing us to return to our uncapped prior, as we shall investigate in the next section.
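The pre-extinction version admits the same kind of sketch (again with my illustrative $N_{\max} = 10^{30}$):

```python
import math

n = 1e11      # absolute birth rank
N_max = 1e30  # illustrative cap (same as above)

def p_liberalism_surviving(C):
    """Posterior probability of liberalism given that cumulative population has exceeded C.
    The 50% priors cancel in Bayes' rule."""
    ssa = n / C                                      # P(N > C) under SSA (Doomsday)
    sia = math.log(N_max / C) / math.log(N_max / n)  # P(N > C) under capped SIA
    return sia / (sia + ssa)

print(p_liberalism_surviving(1e11))  # today (~100 billion) -> 0.5
print(p_liberalism_surviving(1e12))  # one trillion         -> ~0.90
print(p_liberalism_surviving(1e13))  # ten trillion         -> ~0.99
```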
Returning to the Uncapped Prior
We want to investigate the behavior of the probability of liberalism in the limiting case of $N_{\max} \to \infty$.
Applying L’Hôpital’s rule (differentiating with respect to $N_{\max}$),

$$\lim_{N_{\max} \to \infty} \frac{\ln(N_{\max}/C)}{\ln(N_{\max}/n)} = \lim_{N_{\max} \to \infty} \frac{1/N_{\max}}{1/N_{\max}} = 1,$$

so that

$$\lim_{N_{\max} \to \infty} P(L \mid N > C) = \frac{1}{1 + n/C} = \frac{C}{C + n}.$$
Graphically, this becomes:

[Graph: the limiting curve $P(L \mid N > C) = C/(C + n)$ as a function of $C$.]
This is basically the same behavior we have seen before, since the limit was already closely approached when using $N_{\max} = 10^{30}$.
Despite the mathematics working out neatly, I would like to caution once again against using an uncapped prior in combination with SIA. The problem is that while we are about 99% certain that liberalism is true at $C = 10^{13}$, that is only because we are also about 99% certain that $N$ will reach literally infinity. This seems very unreasonable.
Implications (for EAs)
I have to admit that the implications for political thought may be more significant than those for EAs; however, since I am posting this on LessWrong, I will do my best to keep it relevant.
Firstly, near-term (e.g., within the next century) human extinction may provide moderately strong (though probably far from definitive) evidence for the incorrectness of political liberalism. This may seem somewhat irrelevant since it would only become apparent after extinction; however, there is one scenario in which it could still matter: a transition in which humans turn into post-humans who are sufficiently different from humans to no longer fall into their reference class.
I am pretty skeptical of this. What is the chance that there wouldn’t be a remaining minority of legacy humans? What is the chance that one could be sure that such a minority would never arise in the future? Nevertheless, if such a transition were to take place in the coming decades, humans might have some evidence that liberalism is wrong as the transition takes place, and this may influence their actions.
The second point is perhaps more interesting. If humanity survives for a long time (and still greatly increases its population as it ventures out to colonize the galaxy), it may rack up a very great cumulative population. If humanity also manages to cure aging/death in the coming decades, it is possible that some humans with birth rank circa $10^{11}$ may still be alive at a late stage in this process. These humans would then have high confidence that liberalism is true, a confidence they would not share with most humans alive at that time. If these pioneer humans (who would be a tiny minority) were to hold unusually large amounts of power/status relative to their number, then this could have interesting implications.
Rebuttals to Possible Objections
Finally, I shall offer my responses to several possible objections that I thought of (when attempting to attack my own idea) but have not addressed so far. Of course, I am posting this to get further objections to either rebut or accept.
Wait, wouldn’t the Liberal and Heideggerian interpretations have different Kolmogorov complexities?
(In case it isn’t obvious, the relevance of this is that if the Kolmogorov complexities were different then the 50% prior for liberalism could no longer be justified.)
No, they would not. The reason for this is that the worlds that they describe are ontically identical. The difference between the two interpretations does not consist in the difference of beings or things, but rather that of the nature of the Being of those beings and things. See ontic-ontological distinction for more details.
Let us investigate some naïve arguments. Say one were to argue that the Kolmogorov complexity of the liberal interpretation is higher because it also contains the (even earlier) prior. This is wrong because the prior is not a thing that exists in either case but rather a designation of the nature of the Being of humans. Likewise, if one were to argue that the Kolmogorov complexity of the liberal interpretation is lower because we now only have to specify the prior, this would be wrong because the wall of existence vs. non-existence stands between the prior and the world, meaning that we cannot take a stand of indifference and have to specify the world separately.
Wait, wouldn’t this violate/resolve Hume’s is-ought Problem?
No, it would not. The reason for this is that while there are implications for certain arguments for ethical positions (such as that from Rawls’ original position), no argument emerges for the position that one ought to care about the validity of those arguments. A liberal could still support liberal ethics even if arguments for them have been shown to be false, while likewise, an anti-liberal could still support anti-liberal ethics even if liberalism has been shown to be true. On the most general level, the choices between the moral and the immoral, the just and the unjust, and the fair and the unfair are still arbitrary, though perhaps some insight may be gained regarding the nature of justice.
Wait, what about Post-Humans?
As implied by previous arguments I have made, I do not object to objections to the Doomsday argument on the grounds that it provides evidence only for humans and not for sufficiently different post-humans (who have fallen outside of the human reference class). In fact, I believe those objections to be true, and both the SSA and SIA (i.e., the Heideggerian and liberal) versions of the Doomsday argument presented here should not be interpreted as containing evidence regarding the future post-human population.
Wait, wouldn’t this violate Aumann’s Agreement Theorem?
I have previously described a scenario in which some humans from our era survive into a future in which a far greater number of humans have existed. These humans would then have different beliefs on the probability of liberalism being true from most other humans in the future. If they fail to resolve their disagreement through argumentation, this would naïvely appear to be a violation of Aumann’s agreement theorem.
However, it actually is not a violation since Aumann’s agreement theorem applies only to the ontic realm, and their disagreement would be of an ontological nature. In other words, they would not disagree regarding any being or thing in the world but would rather disagree on the nature of beings and things that makes them beings and things. Stated even more simply, they would not disagree on anything concerning objective reality.
Wait, wouldn’t the Heideggerian interpretation reject even the original Doomsday Argument?
OK, I am going to be honest, this is the most serious objection I could think of. Perhaps the absolute birth rank is only of an extremely incidental character so long as one conflates the knowledge of the absolute birth rank with the absolute birth rank itself. It could be argued that having an absolute birth rank of 50 billion instead of 100 billion would necessitate being in a medieval society rather than a modern industrial one. On the other hand, the fact that it took 100 billion people to get to a civilization as advanced as ours is an a priori highly non-obvious fact that took a large amount of historical, demographic, and anthropological research to determine. In fact, in the 1970s, the estimate was about 50 billion, despite the fact that a lot of (but apparently not quite enough) research had been done on the question at that point. Substantial further updates to the 100 billion estimate seem possible, if not likely. Indeed, with little information about the world and little thought, there is little knowledge regarding one’s absolute birth rank. The difference from the prior state could thus be understood as constituting the incidental knowledge of how many people were necessary to reach the current stage of civilization. (However, I am not sure if this last point isn’t simply conflating the knowledge of the absolute birth rank with the absolute birth rank itself again.)
The relevance of the previous argument notwithstanding, I believe that even if we link absolute birth rank to something like technological development, we may still find ourselves facing a largely non-existential difference. Many of the factors tied to the specific human (e.g., gender, inherent abilities, tastes, relative wealth, relative social status, etc.) need not necessarily be left variable in the prior. These are the factors that Rawls was concerned about. Furthermore, I believe we have clearly seen changes within single people’s lifetimes that are at least on the same order of magnitude as the differences between technological levels at different stages of human history. For example, there is a sizeable number of people who went from being peasant children in pre-industrial China to being moderately wealthy adults in post-industrial America without losing the coherence of their selfhood.
We may also note that we should not tie ourselves too closely to Heidegger since his interpretation of human Being may be regarded as extreme. (This may be further evidenced by his previously mentioned extreme politics.) I would judge a degree of puritanism regarding priors that leads one to simply reject the Doomsday argument without needing further arguments as being obviously wrong. Whether Heidegger would have supported this remains unknowable and should be regarded as irrelevant.
Finally, we should note that even if you believe that a non-Rawlsian interpretation would have to reject the original Doomsday argument, this would not lead us to reject the idea that future population may give insight into the validity of the foundations of political liberalism, as long as a capped prior is used. Rather, it would invert the argument and force us to compare a Heideggerian no-evidence (i.e., pure prior) state with a liberal SIA Doomsday argument. In this case, the probability of liberalism would fall as the cumulative population increases. The Heideggerian interpretation would only be probable if the cumulative population were to reach a level close to the cap of the prior on a logarithmic scale. If an uncapped prior were to be used, both interpretations would imply certainty of an infinite cumulative population and thus be (obviously) fallacious.
Conclusion
There is a strong similarity in the questions regarding the reasoning from priors in political philosophy and the Doomsday argument. The difference is that the former occupies the realm of the political and ethical, whereas the latter makes predictions about the future that may be validated or invalidated in the further course of human history. This may provide a promising crack in the wall separating the ontic and ontological realms.
To me, the correspondences between anthropics and political philosophy seem to be:
SIA corresponds to total utilitarianism (caring more about universes with more population)
SSA corresponds to average utilitarianism (caring about average observers within one’s universe)
Min-max (i.e. act as if you expect to be the worst-off observer) corresponds to Rawlsianism
The difference between Rawlsianism and utilitarianism is critical; if you assume an original position and SSA-like random selection, you get utilitarianism, not Rawlsianism (as has been argued). The min-maxing is critical to Rawls’s theory and he goes to great lengths to distinguish his theory from both utilitarianism and softer forms of min-maxing (continuous risk-aversion, or diminishing marginal “VNM utility” in “personal units of utility”). Bridging Rawlsianism with the overall utilitarian-ish tradition that produces anthropic theories requires justifying min-maxing as a heuristic. (To me, the best justification for min-maxing is that people who don’t optimize for coalitional politics will tend to be losers in coalition politics, and philosophers are supposed to pursue truth, often at the expense of coalitional positioning.)
It seems to me that there isn’t an obvious correspondence between SSA and Heideggerianism. Rather, SSA seems to correspond more strongly to eliminativist, physicalist, anti-phenomenological theories that attempt to reason from an objective view-from-nowhere, with the argument that you learn nothing “upon waking up”. It seems to me that a possible similarity is that the average utilitarianism implied by SSA can trend anti-natalist (since one prefers a world with fewer worse-than-average lives), which has some correspondence with Nazi population control, and which rejects liberal arguments for markets improving overall utility, as markets can increase population at the expense of average welfare. This relates to the Marxist concern that capitalism may lead to workers being paid only subsistence wages in the long term, even if there are many workers.
SSA is perhaps more “parochial” in that, by adjusting one’s reference class, one can identify with different sets of observers, whereas this choice is inessential in SIA; this is more compatible with nationalism. However, it would not in general lead to increasing the population of one’s own reference class, instead focusing on average welfare.
In my recent post I compared the subjective, rather than view-from-nowhere, prior of an anthropic agent to the Kantian a priori; Rawls’s philosophy is based on Kant’s.
Alright, firstly, thank you so much for taking the time to reply!
I think you may have misunderstood my main point. (But I also think that there’s a chance that you have correctly understood my point and disproven it but that I’m too stupid or uninformed to have noticed.)
My basic point:
Total utilitarianism, average utilitarianism, and Rawlsianism are all normative theories. They are concerned with how moral agents ought to act.
SSA and SIA are positive theories. They make actual predictions about the future. This means that in the fullness of time, we should know whether SSA or SIA is correct.
Rawlsianism, though a normative theory, seeks to justify itself rationally through the original position thought experiment. This thought experiment requires a good degree of “non-parochiality,” which makes me believe that to accept it one also would have to accept SIA. However, since SIA is a positive theory, this means that Rawlsianism must also be a positive, not a normative, theory. Take this as a paradox, if you will.
As for total and average utilitarianism, I don’t think that they necessitate either SSA or SIA being true. I believe I kind of vaguely understand what you mean by “SIA corresponds to total utilitarianism”: in the case of SIA, our reference class is all possible observers, and in the case of total utilitarianism, we care about the expected value over all possible futures. However, it seems to me that this conflates the positive concept of beliefs about the future population with the normative concept of caring more about universes with larger populations. In other words, someone who believes in total utilitarianism need not necessarily believe that SIA is true because of their belief in total utilitarianism. However, I fear that the vagueness I alluded to previously is due to my lack of understanding, not due to the vagueness of your point. Please enlighten me with regard to any more concrete meaning of the word “corresponds” as you use it.
As for Heideggerianism, I agree that it does not necessarily “correspond” to SSA in any way. However, I do feel that it is likely incompatible with SIA. As noted in my post, I am a bit uncertain about the consequences of Heideggerianism, so I am happy to change my argument to a more general “parochial” vs. “non-parochial” form, using the language you have suggested.
Finally, you referred to the argument that assuming an original position plus SSA-like random selection yields utilitarianism (“as has been argued”). This leads me to suspect that there is a well-established connection between moral philosophy and anthropic reasoning that flows from the former to the latter. Please let me know if that is the case.
I see what you mean: you’re testing a key assumption of liberalism or Heideggerianism, not the theory as a whole. Rawlsianism, however, also includes min-maxing, which seems more normative.
If you are behind a veil of ignorance and optimize expected utility under SSA, then you will want the average utility in your universe to be as high as possible.
SIA is a bit more complicated. But we can observe that SIA gives the same posterior predictions if we add extra observers to each universe so they have the same population, and these observers observe something special that indicates they’re an extra observer, and their quality of life is 0 (i.e. they’re indifferent to existing). In this case, someone reasoning behind the veil of ignorance and maximizing expected utility will want total utility to be as high as possible; it’s better for someone to exist iff their personal utility exceeds 0, since they’re replacing an empty observer.
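Here is a toy numeric check of this padding construction (my own illustration; the populations are arbitrary): two equally likely universes, one with 2 observers and one with 10. SIA weights each universe by its population; SSA over the padded universes, conditioned on finding oneself to be a non-empty observer, yields the same posterior.

```python
# Toy check: SIA posteriors equal SSA posteriors over universes padded
# with "empty" observers, conditioned on finding oneself non-empty.
populations = {"small": 2, "large": 10}  # illustrative populations
prior = {u: 0.5 for u in populations}

# SIA: posterior proportional to prior * population
sia_unnorm = {u: prior[u] * populations[u] for u in populations}
total = sum(sia_unnorm.values())
sia = {u: w / total for u, w in sia_unnorm.items()}

# SSA with padding: every universe padded to the same size;
# P(I am a real observer | universe) = population / padded size
padded = max(populations.values())
ssa_unnorm = {u: prior[u] * populations[u] / padded for u in populations}
total = sum(ssa_unnorm.values())
ssa = {u: w / total for u, w in ssa_unnorm.items()}

print(sia)  # {'small': 0.1666..., 'large': 0.8333...}
print(ssa)  # identical posteriors
```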
I see that Rawlsianism requires an original position. But such an original position is required for both SSA and SIA. To my mind, the difference is that the SSA original position is just a prior over universes, while the SIA original position includes both a prior over universes and an assumption of subjective existence, which is more likely to be true of universes with high population. Both SSA and SIA agree that you aren’t in an empty universe a priori, but that’s an edge case; SIA scales continuously with population while SSA just has a step from 0 to 1.
Heideggerianism doesn’t seem to believe in an objective universe in a direct sense; e.g., the being of tools is not the being of the dynamical physical system that a physicalist would take to correspond to the tool, because the tool also has the affordance of using it. So it’s unclear how to reconcile Heideggerianism with anthropics as a whole. I’m speculating on various political correspondences, but I don’t see how to get them from Heideggerianism and anthropics; I am just noting possible similarities in conclusions.
I agree that the min-maxing of Rawlsianism is purely normative. What I was getting at was the veil of ignorance itself. Perhaps it is worth explicitly saying, “oops, I forgot about that,” for this point.
Yes, I agree. However, I still feel that if you are willing to believe in the veil of ignorance, you should also believe in SIA.
Again, I agree. However, I feel that the veil of ignorance needs both the prior over universes and the prior before the assumption of subjective existence since it is willing to modify existential properties of the observer, without which the observer would not have the same subjective existence.
This is the very reason I believe that it should be OK with the “prior over universes” present in SSA. If reality is not objective, then it is easier to understand this prior as “uncertainty regarding the population of this universe” rather than “the potential of being in another universe which has a different population.” The potential universes and the actual universe become ontologically more similar since they are both non-objective. I have to admit that this is the point I am least certain of, though.
Sounds like a crux. I think this is obviously not the case, though I fail to formulate a sense of “positive theories” that would turn this impression into a clear argument.
What I meant by “positive theories” is “theories that can be falsified.” I think it would be fine to literally call them “scientific theories.” (I don’t think there is anything particularly deep here; just Karl Popper’s thoughts on science.) For example, if the total human population ends up being $10^{20}$, then I would consider that as having falsified SSA (under SSA, the probability of ever exceeding $10^{20}$ given a birth rank of $10^{11}$ is roughly $10^{11}/10^{20} = 10^{-9}$). In a sense, the future of human history becomes a science experiment that tests SSA and SIA as hypotheses. Perhaps I should have relabeled them SSH and SIH.
This stands in contrast with normative statements like “murder is wrong,” which cannot be falsified by experiment.
We can do a ritual that falsifies, but that doesn’t by itself explain what’s going on, as the shape of justification for knowledge is funny in this case. So merely obtaining some knowledge is not enough, it’s also necessary to know a theory that grounds the event of apparently obtaining such knowledge to some other meaningful fact, justifying or explaining the knowledge. As I understand them, SSA vs. SIA are not about facts at all, they are variants of a ritual for assigning credence to statements that normally have no business having credence assigned to them.
Just as a Bayesian prior for even unique, conspicuously non-frequentist events can be reconstructed from preference, there might be some frame where anthropic credences are decision-relevant, which would ground them in something other than their arbitrary definitions. The comment by jessicata makes sense in that way, finding a role for anthropic credences in various ways of calculating preference. But it’s less clear than for either updateful Bayesian credences or utilities, and I expect that there is no answer that gives them robust meaning beyond their role in informal discussion of toy systems of preference.
Yes, I think you are right. It might be best for me to abandon the idea entirely.
Sorry for wasting everybody’s time.
Could you elaborate on this? ETA: Are you saying that philosophers tend to be at the bottom of the social pecking order, so they tend to support views which support people at the bottom having nice things? Interesting hypothesis. But I think prominent philosophers at least are usually decently high in the pecking order. I have been toying with the idea that we could derive minimax (or something like it) from bargaining in situations with an offense-defence imbalance (arguably applicable to humans since the invention of weapons), so low-rank people have the option of spitefully nuking everything if their position is bad enough.
While famous philosophers tend to be decently high in social ranking as determined by education level, money, etc., they’re also likely to have people gang up on them. E.g., Socrates being killed, Galileo being condemned, Spinoza being excommunicated… And there are a lot of good philosophical thinkers we haven’t heard of, who might have been ganged up on as well.
Being a philosopher could in general be considered a form of neurodivergence. People who think and act differently from others could use justice-related protections that also protect other people who think and act differently from others. Updating all the way to min-max is a bit much, but there’s something to the heuristic. Over my life I’ve updated towards thinking that truth-seeking flags someone for coalitional scapegoating and that thinking about coalitions is important to maintaining a truth-seeking orientation.