Some thought experiments follow this template:
1. We have a moral intuition
2. We do some computation to work out what this intuition implies
3. We check how we feel about this implication, and it feels counter-intuitive
Then some people bite the bullet on (3). But bullets sometimes (always?) have a counter-bullet.
You can reverse those thought experiments: take ~(3) as your starting moral intuition, and then derive ~(1), which will be counter-intuitive.
For example, you can start with:
I would care about saving a drowning person even if it came at the cost of ruining my suit
There are a lot of metaphorically drowning people in the world
Therefore I should donate all my money to effective poverty alleviation charities
This is called “shut up and multiply”.
But you can also use the reverse:
I don’t want to donate all my money to effective poverty alleviation charities
Saving a drowning person would also come at a cost to me, since it would ruin my suit
Therefore I shouldn’t save a drowning person
This is called “shut up and divide” (also related: Boredom vs. Scope Insensitivity).
Step (2) might be eliminating a relevant feature that generates the counter-intuition, or it might be a way of opening our eyes to something we were not seeing. And maybe for some thought experiments you find both the assumption and the conclusion intuitive, or both counter-intuitive. But that's not the point of this post.
Here I'm just interested in seeing what the reverses of ethical thought experiments look like. I'll post some examples as answers. I would like to know which other ethical thought experiments have this pattern: a thought experiment that starts from an intuition and derives a counter-intuition, and which can be reversed to instead conclude that the initial assumption was the wrong one.
Update: As I was writing some of them, I realized that some ethical thought experiments are presented as a clash of intuitions (so the "reverse" is part of the original presentation), whereas others seem to try to persuade the reader to bite the bullet on a certain counter-intuition, and omit the reverse thought experiment.
The violinist
Original:
We should save the violinist
Fetuses are like violinists
Therefore we should save fetuses
Reverse:
We don’t care about fetuses
Violinists are like fetuses
Therefore we don’t care about violinists (metaphorically)
Isn't the answer just "all of them"? The contrapositive of an implication is logically equivalent to it.
If (if X then Y) then (if ~Y then ~X). Any intuitive dissonance between X and Y is preserved by negating them into ~X and ~Y.
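A quick truth-table check makes that concrete (a throwaway sketch, not part of the original comment):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b."""
    return (not a) or b

# (X -> Y) and (~Y -> ~X) agree on every truth assignment, which is all
# the contrapositive claim above amounts to.
for x, y in product([False, True], repeat=2):
    assert implies(x, y) == implies(not y, not x)

print("contrapositive matches the original implication in all four cases")
```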
Yeah that makes sense
Many of these calculations get more consistent if you bite just one fairly large bullet: sub-linear scaling (I generally go with logarithmic) of value. Saving a marginal person at the cost of ruining a marginal suit is a value comparison, and the value of both people and suits can vary pretty widely based on context.
The hardest part of this acceptance is that human lives are neither infinite nor incomparable in value. I also recommend accepting that value is personal and relative (each agent has a different utility function, with different coefficients for the value of categories and of individual others), but that may not be fully necessary to resolve the simple examples you've given so far.
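A toy version of that sub-linear scaling might look like the sketch below; all numbers are invented for illustration and nothing here comes from the comment itself:

```python
import math

# Toy sketch of "sub-linear (logarithmic) scaling of value".
# All scales and counts are made-up illustration values.

def aggregate_value(count, scale):
    """Value of `count` similar things under logarithmic (sub-linear) scaling."""
    return scale * math.log1p(count)

suit_scale = 1.0         # arbitrary units
person_scale = 10_000.0  # people valued far more than suits, but not infinitely

# One person vs. one suit: saving the person clearly wins.
print(aggregate_value(1, person_scale) > aggregate_value(1, suit_scale))  # True

# A million suits are nowhere near a million times the value of one suit:
print(aggregate_value(1_000_000, suit_scale) / aggregate_value(1, suit_scale))  # ~19.9
```

The point is just that under logarithmic scaling, aggregates grow slowly, so a life can be worth vastly more than a suit without being worth infinitely more.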
Infanticide
Original:
We don’t care about killing a baby before birth
A baby 1 minute after birth is almost the same as a baby 1 minute before birth
Therefore we don’t care about killing a 1 minute-old baby
Reversed:
We care about killing a 1 minute-old baby
A baby 1 minute after birth is almost the same as a baby 1 minute before birth
Therefore we care about killing a baby before birth
Isn’t the original argument here just the Sorites “paradox”?
We don’t care about killing a single fertilized human cell
A human of any age is almost the same as a human of that age minus one minute
Therefore we don’t care about killing a human of any age
This proves too much. No ethical system I’m familiar with holds that because (physical) things change gradually over time, no moral rule can distinguish two things.
Ah, I actually had just come up with that one (am now realizing "original" wasn't the right word for this one) -- thanks for bringing up this "paradox"!
The Non-identity problem
Original:
We only care about things if they are bad/good for someone
Using a lot of resources isn't bad for people in the future; it just changes who lives in the future
Therefore we don’t mind that people in the future are having a bad time because of our consumption
Reversed:
We care that people in the future are having a bad time because of our consumption
Consuming isn't bad for specific people in the future; it just changes who lives in the future
Therefore we don't only care about things if they are bad/good for someone, but also about what kind of lives we bring into existence
Dust specks vs torture
I feel like this one was presented as a clash of 2 intuitions, so the "reversed" version is also part of the original presentation.
Original:
We prefer X people experiencing Y pain to 1,000 people experiencing 2*Y pain, AND this preference holds for all real X and Y
This can be chained together multiple times
We prefer 1 person experiencing 50 years of torture to a googolplex of people having specks of dust in their eyes
Reversed:
We prefer a googolplex of people having specks of dust in their eyes to 1 person experiencing 50 years of torture
There's some threshold of pain above which we care lexically more
We care more about 1 person experiencing pain slightly above this threshold than about a large number of people experiencing pain slightly below this threshold
keyword to search: lexical threshold negative hedonistic utilitarianism
The original (1) seems pretty clearly false here if X >> 1,000, for basically any value of Y.
Woops, I meant 1,000*X
And Y/2 pain, probably? (Or the conclusion doesn’t follow.)
Ahhh, yep, thanks
Oops, right!
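Using the corrected premise from the exchange above (trade X people at pain Y for 1,000*X people at pain Y/2), the chaining step can be sketched as a toy loop; the pain units and step count below are made up for illustration:

```python
# Each trade swaps N people at pain Y for 1,000*N people at pain Y/2,
# per the corrected premise in the comments above. Units are arbitrary.
people, pain = 1, 50.0  # one person, "50 years of torture" worth of pain

for step in range(1, 6):
    people *= 1_000
    pain /= 2
    print(f"step {step}: {people:,} people at pain {pain:.4f}")

# Iterated far enough, the per-person pain becomes speck-sized while the
# headcount grows without bound -- which is the bullet the original
# presentation asks you to bite, and the reversed version refuses.
```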
Experience machine
Original:
We only care about our happiness
A hypothetical happiness machine could bring us the most happiness
Therefore we want to live in the happiness machine
Reversed:
We don't want to live in a happiness machine
A happiness machine only brings us happiness
Therefore we care about other things than happiness
Trolley problem / transplant
Original:
We want to take actions to save more people
Survival lotteries save more people just like pulling the lever does
Therefore we support survival lotteries
Reversed:
We don’t support survival lotteries
Pulling the lever is an action that changes who dies, just like a survival lottery does
Therefore we don’t support pulling the lever
Could do the same with pulling a lever vs pushing a person
Utility monster
Original:
We care about increasing happiness
If there was a being that had by far the highest capacity for happiness, they might be the best way to increase happiness even at the cost of everyone else
Therefore we care about the utility monster the most (which violates the egalitarian intuition)
Reversed:
We care about all beings equally
If there was a being that had by far the highest capacity for happiness, we still wouldn’t give them more resources
Therefore we don't care about increasing total happiness
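For what it's worth, the aggregation step in the original can be made concrete with a toy allocation; the utility functions and numbers below are invented purely for illustration:

```python
import math

# Ordinary people get diminishing (log) returns from resources, while the
# hypothetical "utility monster" converts resources to happiness at a much
# higher linear rate. All functions and numbers are made up.

def person_happiness(resources):
    return math.log1p(resources)

def monster_happiness(resources):
    return 1_000 * resources

def total_happiness(monster_share, budget=100, n_people=1_000):
    per_person = (budget - monster_share) / n_people
    return monster_happiness(monster_share) + n_people * person_happiness(per_person)

# A total-happiness maximizer prefers handing everything to the monster:
print(total_happiness(monster_share=0))    # ~95: everyone gets an equal share
print(total_happiness(monster_share=100))  # 100,000: the monster gets it all
```

Because the monster turns resources into happiness faster than everyone else combined, maximizing total happiness funnels everything to it, which is exactly the clash with the egalitarian intuition that the reversed version takes as its starting point.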