Availability
The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.
A famous 1978 study by Lichtenstein, Slovic, Fischhoff, Layman, and Combs, “Judged Frequency of Lethal Events,” studied errors in quantifying the severity of risks, or judging which of two dangers occurred more frequently. Subjects thought that accidents caused about as many deaths as disease, and that homicide was a more frequent cause of death than suicide. Actually, diseases cause about sixteen times as many deaths as accidents, and suicide is twice as frequent as homicide.
An obvious hypothesis to account for these skewed beliefs is that murders are more likely to be talked about than suicides—thus, someone is more likely to recall hearing about a murder than hearing about a suicide. Accidents are more dramatic than diseases—perhaps this makes people more likely to remember, or more likely to recall, an accident. In 1979, a followup study by Combs and Slovic showed that the skewed probability judgments correlated strongly (0.85 and 0.89) with skewed reporting frequencies in two newspapers. This doesn’t disentangle whether murders are more available to memory because they are more reported-on, or whether newspapers report more on murders because murders are more vivid (hence also more remembered). But either way, an availability bias is at work.
Selective reporting is one major source of availability biases. In the ancestral environment, much of what you knew, you experienced yourself; or you heard it directly from a fellow tribe-member who had seen it. There was usually at most one layer of selective reporting between you, and the event itself. With today’s Internet, you may see reports that have passed through the hands of six bloggers on the way to you—six successive filters. Compared to our ancestors, we live in a larger world, in which far more happens, and far less of it reaches us—a much stronger selection effect, which can create much larger availability biases.
In real life, you’re unlikely to ever meet Bill Gates. But thanks to selective reporting by the media, you may be tempted to compare your life success to his—and suffer hedonic penalties accordingly. The objective frequency of Bill Gates is 0.00000000015, but you hear about him much more often. Conversely, 19% of the planet lives on less than $1/day, and I doubt that one fifth of the blog posts you read are written by them.
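The 0.00000000015 figure is simply one Bill Gates divided by the world population at the time of writing, roughly 6.7 billion; a quick sanity check:

```python
# One Bill Gates among a world population of roughly 6.7 billion (circa 2007).
world_population = 6.7e9
frequency = 1 / world_population
print(f"{frequency:.2g}")  # ~1.5e-10, i.e., 0.00000000015
```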
Using availability seems to give rise to an absurdity bias; events that have never happened are not recalled, and hence deemed to have probability zero. When no flooding has recently occurred (and yet the probabilities are still fairly calculable), people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. suggest underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred . . . Men on flood plains appear to be very much prisoners of their experience . . . Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.”1
Burton et al. report that when dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions.2 While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.
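The arithmetic behind that claim is just expected value: average yearly damage equals flood frequency times damage per flood, so a large enough rise in damage per flood swamps any drop in frequency. A minimal sketch with made-up illustrative numbers (not Burton et al.'s data):

```python
# Expected yearly damage = floods per year * damage per flood.
# All numbers below are hypothetical, for illustration only.

def expected_yearly_damage(floods_per_year, damage_per_flood):
    return floods_per_year * damage_per_flood

# Before the dam: frequent small floods, little built on the floodplain.
before = expected_yearly_damage(floods_per_year=0.5, damage_per_flood=1_000_000)

# After the dam: floods are ten times rarer, but development on the
# "protected" floodplain makes each flood thirty times more damaging.
after = expected_yearly_damage(floods_per_year=0.05, damage_per_flood=30_000_000)

print(before, after)  # average yearly damage rises despite fewer floods
```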
The wise would extrapolate from a memory of small hazards to the possibility of large hazards. Instead, past experience of small hazards seems to set a perceived upper bound on risk. A society well-protected against minor hazards takes no action against major risks, building on flood plains once the regular minor floods are eliminated. A society subject to regular minor hazards treats those minor hazards as an upper bound on the size of the risks, guarding against regular minor floods but not occasional major floods.
Memory is not always a good guide to probabilities in the past, let alone in the future.
1 Howard Kunreuther, Robin Hogarth, and Jacqueline Meszaros, “Insurer Ambiguity and Market Failure,” Journal of Risk and Uncertainty 7, no. 1 (1993): 71–87.
2 Ian Burton, Robert W. Kates, and Gilbert F. White, The Environment as Hazard, 1st ed. (New York: Oxford University Press, 1978).
Hmm. Usually you can get a strong indicator of the probability of future hazards of a given size by using frequentist statistics, e.g., by finding a statistical distribution that seems to be a good, simple, and logically reasonable fit (one matching the causal structure of the underlying phenomenon). You can, for instance, estimate as I do that the distribution of historical flu risks in particular, or epidemic risks in general, is heavily weighted towards a few large events, and that the probabilities of events many times larger than the largest historical events can be calculated with useful precision. Much more controversially, I see the distribution of technological innovations as a function of complexity as evidence that China and India are not good candidates for developing molecular nanotech. OTOH, the flooding example with dams gives a counter-example where the useful data from which the distribution could be inferred has been removed.
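One standard version of the extrapolation described here is to fit a heavy-tailed distribution to the historical event sizes and read tail probabilities off the fit. A sketch with invented event sizes, fitting a Pareto tail by maximum likelihood (the Hill estimator):

```python
import math

# Hypothetical historical event sizes (say, deaths per epidemic).
events = [120, 300, 450, 800, 2_000, 5_500, 30_000, 250_000]

# Fit a Pareto tail P(X > x) = (x_min / x) ** alpha by maximum likelihood,
# using the smallest observation as x_min (the Hill estimator).
x_min = min(events)
alpha = len(events) / sum(math.log(x / x_min) for x in events)

def prob_exceeding(x):
    """Estimated probability that a future event exceeds size x."""
    return (x_min / x) ** alpha

# Under this fit, an event ten times larger than the largest on record
# still gets non-negligible probability, rather than "it has never
# happened, therefore probability zero."
print(alpha, prob_exceeding(10 * max(events)))
```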
The following does not invalidate the argument in the posting, but:
Subjects thought that accidents caused about as many deaths as disease
I want to eliminate aging and death as much as anyone, but I would say that many deaths from disease in old age should be filed under “old age” rather than “disease.” I wonder how the statistics work out if we look at it that way. (Or maybe Lichtenstein et al. did so already.)
I’m surprised to hear that dams increase average annual damages. Does the Burton book explain how that works? Is it reduced preparation increasing the effect of the largest events?
Concerning age and death, the more recent links are not working for me right now, but here is the CDC with 2003 numbers: ftp://ftp.cdc.gov/pub/ncipc/10LC-2003/PDF/10lc-2003.pdf
Until age 34, accidents are winning, with intentional injury (suicide and homicide) taking second and third. 35-44, accidents are still #1, but cancer and heart disease are each close so disease wins. Cancer wins through 64, then heart disease takes over. Because disease reigns supreme 55+, unintentional injuries fall to #5 overall, and intentional injuries fall off the chart entirely.
If you are talking about young people, yes, accidents win. The main component of that is traffic crashes; in older adults, falls start to come in. Suicide beats homicide in every age category except 15-24 (and the very small 1-9 age group).
On a side note, it looks like the majority of deaths in the first year are things that might be classified as “stillborn” in another country or century. Those deaths in the <1 category rival all deaths from all other causes through age 14.
I imagine it is a lot easier to avoid an accident than to avoid most modern diseases, e.g., cancer. So it makes sense to concentrate on the risks you can do the most about.
people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value.
How do they know it is heavily subsidized and priced far below an actuarially fair value?
Is it worth going to all the trouble of finding out?
I don’t think they realize this. Recently in my area some flood maps were updated to take account of new data suggesting increased risks, with subsequent increases in the subsidized flood insurance rates. The affected households raised a hue and cry, enlisting senators in their cause, and many got the increases reversed. Nothing in the rhetoric I read suggested that they realized they were already getting a great deal; instead, here and in the Florida articles I read (where the government objected to the remaining insurer increasing rates after recent hurricanes), the unstated assumption seemed to be that the rates were ‘unfair’ and profitable.
A house is hundreds of thousands of dollars, and the disruption to your life if it and its contents are destroyed is profound. In some place like Florida, it may be more likely than not that in your lifetime your house will be damaged or destroyed, especially given the suggestion that global warming will increase the variance of storms (and hence the occurrence of super-hurricanes). I think it is worthwhile!
Whilst it is true that it is a lot easier to avoid an accident than a disease, you can probably do more about your risk of dying of disease. For example, you could familiarize yourself with the symptoms of various diseases; then you would be more likely to know if you caught one. With a lot of diseases, catching it early will vastly increase your odds of surviving. Combine that with the fact that accidents aren’t a common cause of death in the first place, and even cutting your risk of disease death by only one-fifteenth would do more than completely eliminating the risk of accidents.
Good question, Pseudonymous. I’m interested in the people that do buy insurance when it’s in their rational self-interest, and what’s different about them.
In the Bahamas the homicide rate is about 15 times greater than the suicide rate: http://en.wikipedia.org/wiki/List_of_countries_by_homicide_rate http://en.wikipedia.org/wiki/List_of_countries_by_suicide_rate
Some of this may stem from cultural reluctance to identify suicide as such… but I think the majority of it is simply the mark of a violent society.
BTW, I love the Bahamas, I spend 9 months a year sailing there. It may be a troubled paradise, but nonetheless it remains a paradise.
The bias probably results because risks that people have less control over (homicide) would be more important to remember than the ones that are primarily due to one’s own life decisions (suicide, health practices). The former risks seem unjust and avoidance practices still need to be learned as a means of living adaptively in an uncertain environment, vs. the latter risks, which already seem to be under our control.
You can properly use fiction as a shared language. Complex scenarios that would take long to explain can be referenced conveniently by way of a movie or book name. For example, two key classes of future, which are seriously discussed, are, one, the AIs subjugate us, and two, we enslave the AIs. These are not exhaustive but they are of particular interest to us, as is the more general topic of rights in a future with AIs, both human rights and AI rights. I have seen serious discussions of this, not based on movies. Science fiction, and fiction generally, responds to serious concerns, so whatever our concern, we can often find a fiction that we can use as a reference to help us efficiently convey our concern to someone else. Here the fiction is not being used as evidence but as a common language. Like that Star Trek episode in which a race communicates by talking about legends. “Darmok and Jalad at Tanagra.”
Lichtenstein et aliōrum research subjects were 1) college students and 2) members of a chapter of the League of Women Voters. Students thought that accidents are 1.62 times more likely than diseases, and league members thought they were 11.6 times more likely (geometric mean). Sadly, no standard deviation was given. The true value is 15.4. Note that only 57% and 79% of students and league members respectively got the direction right, which further biased the geometric average down.
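For readers unfamiliar with it: the geometric mean averages ratios multiplicatively, and any answer with the direction reversed (a ratio below 1) drags it down hard. A toy illustration with invented judgments, not the paper's data:

```python
import math

def geometric_mean(xs):
    # exp of the average log: the appropriate way to average ratio judgments.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical judged ratios of "disease deaths / accident deaths".
# Values below 1 mean the subject got the direction wrong.
right_direction = [2, 5, 10, 20]
with_wrong = right_direction + [0.5, 0.25]  # two reversed-direction answers

print(geometric_mean(right_direction), geometric_mean(with_wrong))
```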
There were some messed up answers. For example, students thought that tornadoes killed more people than asthma, when in fact asthma kills 20x more people than tornadoes. All accidents are about as likely as stomach cancer (well, 1.19x more likely), but they were judged to be 29 times more likely. Pairs like these represent a minority, and subjects were generally only bad at guessing which cause of death was more frequent when the ratio was less than 2:1. These are the graphs from the paper.
The following excerpt is from Judged Frequency Of Lethal Events by Lichtenstein, Slovic, Fischhoff, Layman and Combs.
There were more instructions about relative likelihoods and scales. And there was a glossary to help the people understand some categories.
Note that there was nothing about “old age” anywhere. There is no such thing as “death by old age,” but I’ll risk generalizing from my own example to say that some people think there is. And even those who know there isn’t might think, despite the instructions, “Oh, darnit, I forgot that old people count, too.”
I wish I’d tested myself BEFORE reading the correct answer. As near as I could tell, I would’ve been correct about homicide vs. suicide, but wrong about diseases vs. accidents (“Old people count, too!” facepalm). I wouldn’t even bother guessing the relative frequency. I didn’t have a clue.
When I need to know the number of square feet in an acre, or the world population, it takes me seconds to get from the question to the answer. I dutifully spent ~20 minutes googling the CDC website, looking for this. It wasn’t even some heroic effort, but it’s not something I, or most other people, would casually expend on every question that starts with “Huh, I wonder….” (We should, but we don’t.)
As for what I found: I dare you, click on my link and see table 9 (http://www.cdc.gov/NCHS/data/nvsr/nvsr58/nvsr58_19.pdf). Did you? If you did, you would’ve seen that Zubon2 was right in this comment. Accidents win by quite a margin in the 15–44 demographic. I couldn’t find 1978 data, but I’d expect it to be similar (Lichtenstein et al.’s tables are no help because they pool all age groups).
I spent the last two hours looking at these tables. Ask me anything! … I won’t be able to answer. Unless I have the CDC tables in front of me, I might not even do much better on Lichtenstein et aliōrum questionnaire than a typical subject (well, at least, I know tornadoes have frequency; measles doesn’t—I’ll get that question right). I suppose that people who haven’t looked at the CDC table are getting all of their information from fragmented reports like “Drive safely! Traffic accidents are the leading cause of death among teenagers who […]!” or “Buy our drug! […] is the leading cause of death in […] over 55!” or “5-star exhaust pipe crash safety rating!” Humans aren’t good at integrating these fragments.
Memory is a bad guide to probability estimates. But what’s the alternative? Should we carry tables around with us?
Personally, I hope that someday data that is already out there in the public domain will be made easily accessible. I hope that finding the relative frequencies of measles-related deaths and tornado-related deaths will be as quick as finding the number of square feet in an acre or the world population, and that political squabble will focus on whether or not certain data should be in the public domain (“You can’t force hospitals to put their data online! That violates the patients’ right to privacy!” “Well, but….”)
Note: repost from SEQ RERUN.
I might be 12 years late, but I am only now reading Rationality and taking the time to properly address these issues.
What I have really found to be missing is the reason why availability is, in most cases, a bias; why generalizing from a limited set of personal experiences and memories is statistically wrong.
That reason, of course, lies in the fact that the available examples we rely on when this heuristic comes into play do not form a valid statistical sample: neither in sample size (we would need dozens, if not hundreds, of examples to reach a proper confidence level and interval, where we usually rely on only a few [<10]), nor in sampling frame (our observations are highly subjective and do not equally cover all sub-populations; in fact, they most likely cover only a very specific subset of the population that revolves around our neighborhood and social group).
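To put rough numbers on the sample-size point: under the usual normal approximation, a proportion estimated from n observations has a 95% confidence interval of half-width about 1.96 * sqrt(p(1-p)/n) (and the approximation is itself shaky at n = 10, which only strengthens the point). A quick sketch:

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# With ~10 personal anecdotes, an estimated proportion of 0.5 is really
# "somewhere between roughly 0.2 and 0.8" -- close to useless.
print(ci_half_width(0.5, 10))    # ~0.31
print(ci_half_width(0.5, 1000))  # ~0.031
```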
Additionally, I found an action plan for fighting this to be missing (both here and in “We Change Our Minds Less Often Than We Think”). My personal advice is to use our motivation to combat it in the following way: notice whenever we form a belief, and ask ourselves: am I generalizing from a limited set of examples that come to mind from memories and past experiences? Am I falling for the availability heuristic?
When you catch yourself, like I now do daily, rate how important the conclusion is. If it is important, avoid reaching it through this heuristic (choose deliberate, rational analysis instead). If not, you may reach it using this generalization, as long as you label that belief as non-trustworthy.
I believe that labeling your beliefs with trust-levels could be a very productive approach; when, in the future, you rely on a previous belief, you can incorporate the trust-level you have in that belief into play and consider if you may or may not trust it towards your current goal.
I would love to hear from you guys about all of this. For more, you can read what I’ve written in my Psychology OneNote notebook, in the page about this very bias.
Reminds me of the following quote that is attributed to J. Paul Getty:
“In times of rapid change, experience could be your worst enemy.”
“While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.”
This is fascinating. Should we not be building dams? Could we say the same thing about fighting bushfires, since fighting them increases the amount of fuel they have available for next time?
Short answer: yes. Forest fires are a natural and necessary part of forest development, and controlled burns are a long-standing indigenous practice. There are trees that will not start new generations without a fire; the seeds are dropped into the ashes, which let them crack open from heat, and they need the new sunlight access and nutrient access from the fire to get established. Fires also keep on top of pest populations and diseases, which can otherwise reach astronomical numbers and completely wipe out populations. And if fires are frequent, each fire will stay small, as it will soon run into the area affected by the last fire where there is no fuel, and stop. The lack of fuel means they do not flicker high, and they do not run hot, so the inside and top of the larger trees remain fine. The contained area means they can be fled. So most mature trees and large animals will survive entirely.
The build-up of fuel due to fire suppression, on the other hand, leads to eventual extreme fires that are uncontainable, and can even wipe out trees previously considered immune to fire, such as sequoias, and reach speeds and sizes that become death traps for all animal life, as we saw in Australia.
Going back to indigenous fire management is all easier said than done, though; nowadays, human habitats often encroach so closely on wildlands that a forest fire would endanger human homes. And many forests are already so saturated with fuel that attempting a controlled burn can get out of hand.
But the fire management policies that got us to this point are one of many examples where trying to control a natural system and limit its destructive tendencies is more destructive in the long run, because the entire ecosystem is already adapted to destruction, and many aspects of it that seem untidy or inefficient or horrible at a glance end up serving another purpose.
E.g., you might think, on the basis of high underbrush promoting forest fires, that we should cut down underbrush and remove dead trees from forests to limit fires; many humans thought that. This turned out to be a terrible idea, as it effectively devastated habitat for insects and small animals that burrow into or hide under dead wood, pulled nutrients from an ecosystem that was previously a closed circle, removed the fungi’s food sources, which in turn were crucial for the tree networks that facilitate water trades during droughts and warnings of insect infestations, and removed perches on which animals could flee during floods. Historically, the healthiest forests were the ones we just left the fuck alone, and many interesting natural sites are the result of destruction followed by humans pulling out.
The current Chernobyl site is a startling illustration of this; humans fucked up the area, but then, we stopped messing with it, and it turned into a stable biodiversity hotspot despite the radiation; animals migrated there to flee humans, thrived and multiplied. We’ll have to see how it gets through the war.
We also have nature reserves in Germany that are former military testing sites, that essentially got exploded to bits. The resulting habitat (lots of open ground with holes and shards and sand) was incredibly interesting for reptiles and insects, who also profited immensely from the fact that humans did not enter the area out of fear of being blown up by remaining grenades. Realising that having it grow back into a forest would ruin it for these animals, we decided to release natural grazers on there, which are wild and which humans cannot interact with. We got some leftover grazers which zoos said were hopeless and not reproducing no matter what they tried, so they would not be too sad if they got blown up. They did not get blown up. They are doing great. They are reproducing. The fact that nature thrives in areas which humans contaminated radioactively or littered with explosives, simply because we stop going there and messing with it, is simultaneously hopeful and depressing to me.
Forests doing fine if just left alone might change with climate change, though; assisted tree migration will likely be needed here, as trees do not naturally migrate fast enough to keep up with the rapid changes. This is currently being extensively trialed in Europe.
Dams are also bad for other reasons. They tend to wipe out the varied shallow-water, shore, and flooded-then-drying marshland habitat that is so crucial for biodiversity, for the survival of many endangered birds, amphibians, and insects, and even for fish; young animals often hide in dense vegetation in shallow water to escape predation and stay warm, and a deep river with standing, deoxygenating water or fast currents is a completely hostile habitat for them. Dams also reduce migration options for animals, and hence genetic diversity, as populations become practically isolated from each other, and they interrupt nutrient exchanges.
Which is very unfortunate, because dams are one of the leading options we have for storing renewable energy for winter, which is a massive hurdle, and for getting renewable energy in winter without the insect and bird deaths that current wind energy causes.
Sorry for the long rant this late. I really care about wild lands. They are incredible systems.
Odd that your cautionary tale about humans accidentally ruining wilderness includes a story about humans successfully releasing animals into a new environment to keep it safe.
Not a new environment. These animals were native in this environment, and humans had hunted them to regional extinction. We first hunted the wolves to regional extinction, seeing them as evil predators eating our livestock. Then the grazers’ population exploded, and they ate all our food, so we hunted them to extinction. It turns out they had kept the forest at bay, and the whole ecosystem was wrecked, and we lost the reptiles and insects too. Bombing it ironically restored the lack of forest, and the insects and reptiles came back, but as the forest regrew, they were threatened again. And after that point, we basically just reversed our steps to how it had been before we messed with it. Put the grazers back, and a fence around them. Monitored from a distance. Saw it had returned to a stable state. Stopped messing with it.
Allowing the large grazers and apex predators back is essential for rewilding. We had a project in the Netherlands where they decided to skip the wolves, and the necessary land for balance. The grazers massively multiplied, and then mass starved, and humans completely lost it. This is beginning to fix itself—the huge amounts of dead grazers seem to be attracting the wolves. They have crossed the border and are reestablishing. The whole return of wolves in Europe was unplanned, just a result of us having fixed the ecosystem so it could support them again, and them crossing back in from a reservoir in the East. But many of these animals have been pushed incredibly far out of their original range, and in that scenario, assisted migration speeds things up a lot. Similar with trees.
Putting the original apex predators and original grazers back is very, very different from “hey, you know what Australia needs? Rabbits!”
And it is not so much a story about humans ruining nature in general. But about the fact that stable natural systems include destruction, and that what looks like optimising from a human’s standpoint often fucks the balance up. This is a valuable lesson to learn for bio-hacking, too.
The increased damage is due to building more on the flood plains, which brings economic gains. It is very possible that these gains outweigh the increased damage. Within standard economics, they should, unless strongly subsidized insurance (or the expectation of state help for the uninsured after a predictable disaster) is messing up the incentives. Then again, standard economics assumes rational agents, which is kind of the opposite of what is discussed in this post...
The straightforward way to force irrational homeowners/business owners/developers to internalize the risk would be compulsory but not subsidized insurance. That’s not politically feasible, I think. That’s why most governments would use some clunky and probably sub-optimal combination of regulation, subsidized insurance, and other policies (such as getting the same community to pay for part of the insurance subsidies through local taxes).
This is nit-picking based on prescriptive linguistics, but:
“Subjects thought that [...] homicide was a more frequent cause of death than suicide [...]”
Homicide actually means “death of a person” so homicide IS more frequent cause of death than suicide, as suicide is a small subset of homicide, along with manslaughter and murder.
I don’t know where you got that definition from, but it disagrees with common usage and the dictionary. All of the “cides” are about murder, not death (patricide, regicide, suicide, etc), which could have been a clue, since “suicide” would be nonsense if this pattern held.
https://en.wiktionary.org/wiki/homicide
Maybe the number of people living on less than $1/day should be updated. 19% is not really close to reality anymore, luckily!
Our lack of proper preparedness and response for COVID-19 appears to be a prime example of absurdity bias in the modern day, on a larger scale. I wonder if there exist cycles in history by which, after some number of generations, an event is deemed absurd by the masses even when, from a bird’s-eye view, its occurrence appears to be relatively consistent.