Hi, I write and find exercises on biases to help myself and others think better.
For example:
Bob is an opera fan who enjoys touring art museums when on holiday.
Growing up, he enjoyed playing chess with family members and friends.
Which situation is more likely?
Bob plays trumpet for a major symphony orchestra.
Bob is a farmer.
My question to the LessWrong community:
Does it make sense to learn like this?
Answer to the example (and other exercises):
https://newsletter.decisionschool.org/p/decision-making-bias-base-rate-fallacy
The question you have here looks underspecified to me: it says nothing about the selection algorithm and instead speaks about abstract people. I’m not sure it’s meaningful to speak about likelihoods when there’s no selection algorithm.
The general research on intelligence improvement suggests that most exercises targeted at intelligence improvement don’t improve general intelligence and at best improve a narrow subskill.
In this case the goal of the exercise seems to be about teaching the concept of the base rate fallacy.
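The base-rate point can be made concrete with a small Bayesian sketch. All the numbers below are invented for illustration, not real statistics about musicians or farmers:

```python
# Hedged sketch with made-up numbers to illustrate the base rate fallacy.
# Even if Bob's profile fits a symphony trumpeter far better than a farmer,
# the tiny base rate of orchestra trumpeters can still make "farmer" more likely.

base_rate_trumpeter = 1e-5   # assumed fraction of people in major orchestras
base_rate_farmer = 1e-2      # assumed fraction of people who farm

p_profile_given_trumpeter = 0.5   # assumed: the profile fits most trumpeters
p_profile_given_farmer = 0.01     # assumed: the profile fits few farmers

# Bayes' rule, unnormalized: P(job | profile) ∝ P(profile | job) * P(job)
posterior_trumpeter = p_profile_given_trumpeter * base_rate_trumpeter
posterior_farmer = p_profile_given_farmer * base_rate_farmer

print(posterior_trumpeter)  # 5e-06
print(posterior_farmer)     # 0.0001
print(posterior_farmer > posterior_trumpeter)  # True: the base rate dominates
```

Under these assumed numbers the representativeness of the description loses to the base rate by a factor of twenty, which is exactly the intuition the exercise is trying to train.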
_________________
Let’s look at example 2:
You suggest that people should look at the general base rate for divorce. That’s stupid given that information for your demographic is available.
Whether or not to make a prenup is a complex decision. The argument that divorces without a prenup are harder doesn’t automatically indicate that prenups are good. If someone puts their commitments in Beeminder, breaking them becomes more costly; that’s the point of making the commitment in Beeminder. In the same way, marriage is also a commitment device. It’s plausible that weakening its power as a commitment device is worthwhile, but it’s a complex issue.
_________________
If you look at the example of cancer and faith healing, I think the way it proposes to reason about that example is bad as it’s basically about appeal to authority.
Given the policy discussions of the last decade, it’s quite ironic to use the American Cancer Society here as an authority, because it advocates treatments that don’t seem to increase survival over the base rate. The US historically has a higher rate of diagnosing people with cancer, a higher cancer survival rate, and the same number of cancer deaths as comparable countries.
This led the Obama administration to decide to reduce the number of people getting diagnosed with cancer by reducing testing, and the American Cancer Society was against that.
I think state of the art rationality to that question would be:
Taboo the phrase “incurable disease”. It’s bad ontology that confuses the nature of the disease with the nature of the treatments we have for it. The phrase suggests a certainty about the nature of the disease that just doesn’t exist in the real world, and parts of Western medicine are responsible for that.
Instead of thinking in terms of “statistical certainty”, think in terms of uncertainty. The world is very uncertain. When it comes to amputees, we don’t assume that there’s a “statistical certainty” that some amputees will grow their limbs back.
The paradigm of cancer of two decades ago was flawed, and pointing to examples of people who survived cancer was an argument against that paradigm. A lot more cancer diagnoses resolve themselves without treatment than the American Cancer Society wants to admit.
The faith healing argument is a god-of-the-gaps argument. There are gaps in the model of how cancer develops (and especially in the model of two decades ago).
After accepting those gaps, the question is whether there’s reason to believe that faith healing is responsible for some of them. The faith healer in question didn’t point us to a controlled study to read as evidence. They also didn’t point us to a gears-level model of why we should believe that faith healing works.
That’s the difference between them and the American Cancer Society. The American Cancer Society has gears-level models for its treatment recommendations and some controlled studies to back them up (controlled studies where you don’t treat people with cancer are, however, hard to get past ethical review boards, so the evidence we have isn’t that great).
_____________________
After looking at the content of the question, here’s the underlying problem: the reader isn’t asked to make a real decision; they are just expected to go “boo faith healers; yay authority”. Given that that’s the general thrust, I wouldn’t expect the reader to learn anything besides “boo outgroup, yay mainstream authority”.
If you want to affect real decision making, you should have examples that aren’t just about rounding down to stereotypes. Examples should confront the reader in a way where simply yielding to stereotypes does not provide the right answer.
The lesson should be that reality is complex and not that it’s easily solved with stereotypes.
Thank you for the feedback and the interesting story about the American Cancer Society!
Do you have a blog or place where you write about that type of “background” information/history?
I still have a lot to learn, and you are right: not everything is black and white; reality is complex.
At the start, you mentioned “selection algorithm”, could you expand on that?
Thanks!
One example of a selection algorithm would be: “You go to a bar in Austin, you are talking with a person, and you learn that X is true of them. Is it more likely that A or B is true?”
This setup allows me to picture an event happening in the real world and events have likelihoods.
While the ambiguity is unlikely to lead to a misunderstanding in this case, there are plenty of decision theoretic problems where it matters. When creating practice exercises you want them to be specific and without ambiguity.
I have now written up the story of cancer and have a draft; I’ll share it with you.