I think the starting point is to examine the moral progress that’s been made so far in history, and try to figure out how it happens. The best stuff I’ve read on this so far is from Pinker (The Better Angels of Our Nature and parts of Enlightenment Now).
I haven’t read Pinker’s “Better Angels” but I’ve heard that it is both (1) hopeful and (2) possibly built on some cherry-picked data? (The caveman stuff I think is very plausible. It is the post-1800 stuff that I think might be cherry-picked.)
Do you think I should read it directly and trust that the data and stats are clean, or should I mix it somehow with other content? I’d love a “highlights and lowlights” summary from someone with epistemic rigor, so I could save time and avoid reading it myself <3
Before that book came out, my working theory was less optimistic and more straightforward. It grew out of the intellectual tradition of Lewis Richardson’s pacifism-motivated war (or peace) studies, where a stationary statistical distribution is assumed (thus we assume no moral progress?) unless a change can be positively demonstrated and attributed to something.
Here is a modern example of research on the causal structure of war, which suggests that if an-event-plausibly-describable-as-WW3 does not occur until after roughly 2103-2163, then we could decisively say that the post-WW2 Long Peace is a true deviation from historical trends, i.e. evidence of some enduring change to the statistical distribution from which actual historical wars may have been sampled since the invention of muskets and cannons and so on.
Aaron Clauset’s discussion section is interesting here:
The agreement between the historical record of interstate wars and Richardson’s simple model of their frequency and severity is truly remarkable, and it stands as a testament to Richardson’s lasting contribution to the study of violent political conflict.
There are, however, a number of caveats, insights, and questions that come out of our analysis. For instance, Richardson’s Law—a power-law distribution in conflict event sizes—appears to hold only for sufficiently large “deadly quarrels,” specifically those with 7061 or more battle deaths. The lower portion of the distribution is slightly more curved than expected for a simple power law, which suggests potential differences in the processes that generate wars above and below this threshold.
With only 95 conflicts and a heavy-tailed distribution of war sizes, there are relatively few large wars to consider. This modest sample size surely lowers the statistical power of any test and is likely partly to blame for needing nearly 100 more years to know whether the long peace pattern is more than a run of good luck under a stationary process. One could imagine increasing the sample size by including civil wars, which are about three times more numerous than interstate wars over 1823–2003. Including these, however, would confound the resulting interpretation, because civil wars have different underlying causes [10, 43, 48], and because the distribution of civil war sizes is shifted toward smaller conflicts and exhibits relatively fewer large ones [24].
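To make the “run of good luck” question concrete, here is a minimal Monte Carlo sketch of my own (not Clauset’s actual method, which is more careful): assume wars arrive as a Poisson process with severities drawn from a stationary power law, then ask how often 70 peaceful years show up by chance alone. Every parameter below is an illustrative guess, not a fitted value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative guesses, NOT fitted values from the paper:
WARS_PER_YEAR = 95 / 180   # ~95 interstate wars over ~180 years (1823-2003)
ALPHA = 1.5                # assumed power-law exponent for severities
X_MIN = 7_061              # tail threshold, from the Clauset passage above
BIG_WAR = 10_000_000       # assumed "WW2-scale" battle-death count
YEARS = 70                 # rough length of the post-WW2 Long Peace so far

def sample_severity(n):
    """Inverse-CDF draw from a continuous power law p(x) ~ x^-ALPHA, x >= X_MIN."""
    u = rng.uniform(size=n)
    return X_MIN * (1 - u) ** (-1 / (ALPHA - 1))

def peace_holds_once():
    """One 70-year draw from the stationary model: does no war reach BIG_WAR?"""
    n_wars = rng.poisson(WARS_PER_YEAR * YEARS)
    return not np.any(sample_severity(n_wars) >= BIG_WAR)

trials = 20_000
p = np.mean([peace_holds_once() for _ in range(trials)])
print(f"P(a 70-year 'long peace' under the stationary model) ~ {p:.3f}")
```

With numbers in this ballpark, the stationary model coughs up a 70-year lull a substantial fraction of the time, which is exactly why “just a lucky run” is still a live hypothesis.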
My own tendency is to treat nearly all violence as “just violence”. So if two guys get in a fight in a bar, and their buddies jump in and start fighting “anyone who isn’t a buddy of mine” (no personal beefs, just treating others merely as members of an enemy group), and 3 people die in that bar fight, then I’d model it as a battle with 3 casualties.
If there’s a shooting later that kills 7, inspired by that fight, that’s another battle with 7 casualties, and arguably a continuation of the “war”(?)… and so on. Most people don’t agree with my coding preference here, possibly for good reasons that I just haven’t learned yet? Turchin treats violent protests as a separate category from wars (bigger) and crime (smaller), but this might be essentially data pragmatism?
Censoring the smaller events out of the database of “wars” on semantic grounds, and then treating the absence of “wars so-defined” as an absence of “the bad violence that we’re officially studying” (as opposed to the other violence that we are NOT officially studying), seems questionable to me.
For me, the small-scale version generates vivid examples from common experience that illustrate my current best guess as to why “conflict sizes” have a distribution similar to “earthquake sizes”! There is potential, in BOTH social-animal conflict AND geophysical avalanches, for common small events plus rare cascading amplification of released tension.
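To make that analogy concrete, here is a minimal sketch of the classic Bak-Tang-Wiesenfeld sandpile (a standard self-organized-criticality toy model; my choice of illustration, not anything from Richardson or Turchin). Tension accumulates one grain at a time, most relaxations stay local, and occasionally a single grain triggers a system-spanning cascade, which yields a heavy-tailed avalanche-size distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
grid = np.zeros((N, N), dtype=int)  # accumulated "tension" at each site

def drop_grain():
    """Add one unit of tension at a random site, then relax any cascade.
    Returns the avalanche size (total number of topplings)."""
    i, j = rng.integers(N, size=2)
    grid[i, j] += 1
    avalanche = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return avalanche
        for i, j in unstable:
            grid[i, j] -= 4       # topple: shed tension onto the neighbors
            avalanche += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:  # edge grains are lost
                    grid[ni, nj] += 1

sizes = np.array([drop_grain() for _ in range(50_000)])
sizes = sizes[sizes > 0]
# Most avalanches are tiny; a rare few are enormous.
for s in (1, 10, 100):
    print(f"P(avalanche size >= {s:>3}) = {np.mean(sizes >= s):.4f}")
```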
Maybe adding the little events in somehow would just recover Richardson’s Law more strongly, and/or it might show a clearer post-WW2 change? Maybe? A lot of sins can hide in data cleaning, data-gap imputation, and event coding choices.
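As a toy demonstration of that last point, here is a sketch on synthetic data (every distribution and number is made up) showing how much a fitted power-law exponent can swing depending on where you draw the “this counts as a war” threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "violence" data: a big lognormal body of small events
# (bar fights, riots) plus a genuine power-law tail of big wars.
small = rng.lognormal(mean=1.0, sigma=1.0, size=50_000)
u = rng.uniform(size=500)
big = 7_061 * (1 - u) ** (-1 / 0.5)   # power law with alpha = 1.5
events = np.concatenate([small, big])

def mle_alpha(data, x_min):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x_i / x_min))."""
    tail = data[data >= x_min]
    return 1 + len(tail) / np.sum(np.log(tail / x_min))

# The fitted exponent swings with where you draw the "war" line:
for x_min in (10, 100, 1_000, 7_061):
    n_tail = int(np.sum(events >= x_min))
    print(f"x_min = {x_min:>5}: alpha_hat = {mle_alpha(events, x_min):.2f} "
          f"(n = {n_tail})")
```

Same underlying events, four different “war” definitions, four different exponents: that is the kind of sin a coding choice can hide.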
Coming at it with background knowledge and interests like these, I’m curious whether Pinker looked at the literature and found evidence that the real story was quite clear, or whether he just sort of… uh… wrote something that sounded good?
In my model, the first step is to have a clear theory of causation that is scientifically coherent. If the theory of causation is adequate, you can “imagine an intervention and mentally turn the gears” and then see whether the results would be “less war” or what. Then the cheapest imaginable intervention that buys the most predicted goodness should perhaps be tried? And that would be “optimistic rational progress at work!” <3
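As a cartoon of that workflow, here is a toy structural causal model where every variable, coefficient, and functional form is a made-up placeholder; the point is just the shape of the exercise: posit a causal structure, apply a do()-style intervention, and read off the predicted change in “war”:

```python
import numpy as np

rng = np.random.default_rng(3)

# A cartoon structural causal model of war risk. Every variable name,
# coefficient, and functional form is a made-up placeholder; the point
# is only the workflow: posit causes, intervene, read off the prediction.
def simulate(n=100_000, do_trade=None):
    trade = rng.uniform(0, 1, n) if do_trade is None else np.full(n, do_trade)
    tension = rng.uniform(0, 1, n) - 0.5 * trade    # assumed: trade eases tension
    p_war = 1 / (1 + np.exp(-(4 * tension - 2)))    # assumed logistic link
    return float(np.mean(rng.uniform(size=n) < p_war))

baseline = simulate()
intervened = simulate(do_trade=0.9)   # "mentally turn the gears": do(trade := 0.9)
print(f"P(war) baseline:        {baseline:.3f}")
print(f"P(war) under do(trade): {intervened:.3f}")
print(f"Predicted effect:       {intervened - baseline:+.3f}")
```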
I don’t think the data is cherry-picked, but you could argue with some of his statistical analysis. He lays it all out pretty clearly though, so the book is valuable to read even if you disagree in the end.
He covers violent crime (which would include bar fights) as well.