Sure.
It’s rather against the point of the article to start talking about the above examples of privileged questions…
Even so, it’s worth noting that immigration policy is a rare, important question with first-order welfare effects. Relaxing border restrictions creates a free lunch in the same way that donating to the Against Malaria Foundation creates a free lunch. It costs on the order of $7 million to save an additional American life, but on the order of $2500 to save a life if you’re willing to consider non-Americans.
By contrast, most of politics consists of policy debates with about as many supporters as opponents, suggesting there isn’t a huge welfare difference either way. What makes immigration and international charity special is the fact that the beneficiaries of the policies have no say in our political system. Thus the benefits that accrue to them are not weighted as heavily as our benefits, which means there’s a free lunch if overall welfare is what you care about.
“Fairness” depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it’s fair that the hare wins. But why does the hare get to be better at racing in the first place?
Debates about what is and isn’t fair are best framed as debates over what to condition on, because that’s where most of the disagreement lies. (As is the case here, I suppose.)
This is much better than my moral.
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins.
The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
“Alas”, said the mouse, “the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I must run into.”
“You only need to change your direction,” said the cat, and ate it up.
-Kafka, A Little Fable
Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:
Pyne: I guess your long hair makes you a girl.
Zappa: I guess your wooden leg makes you a table.
Of course this would imply that Pyne is not a featherless biped.
Source: Robert Cialdini’s Influence: The Psychology of Persuasion
I’ve always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.
That is true. But there are also such things as holding another person at gunpoint and ordering them to do something. It doesn’t make them the same person as you. Their preferences are different even if they seem to behave in your interest.
And in either case, you are technically not deciding the other person’s behavior. You are merely realigning their incentives. They still choose for themselves what is the best response to their situation. There is no muscle now-you can flex to directly make tomorrow-you lift his finger, even if you can concoct some scheme to make it optimal for him tomorrow.
In any case, commitment devices don’t threaten the underlying point because most of the time they aren’t available or cost-effective, which means there will still be many instances of behavior that are best described by non-exponential discounting.
We can’t jettison hyperbolic discounting if it actually describes the relationship between today-me and tomorrow-me’s preferences. If today-me and tomorrow-me do have different preferences, there is nothing in the theory to say which one is “right.” They simply disagree. Yet each may be well-modeled as a rational agent.
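For concreteness, here is a minimal numeric sketch of that disagreement (the two discount functions are the standard ones, but the specific rewards and parameters are invented for illustration). Under exponential discounting, the ranking of two dated rewards never flips as the evaluation date moves; under hyperbolic discounting it does, and that flip just is the disagreement between today-me and tomorrow-me:

```python
# Illustrative sketch: a preference reversal under hyperbolic discounting.
# The choice: a small reward (10 units on day 4) vs. a large one (15 on day 5).

def exponential(delay, beta=0.9):
    # Exponential discount factor: beta ** delay (time-consistent).
    return beta ** delay

def hyperbolic(delay, k=1.0):
    # Hyperbolic discount factor: 1 / (1 + k * delay) (time-inconsistent).
    return 1.0 / (1.0 + k * delay)

def preferred(discount, now):
    small_amount, small_day = 10, 4
    large_amount, large_day = 15, 5
    v_small = small_amount * discount(small_day - now)
    v_large = large_amount * discount(large_day - now)
    return "small-sooner" if v_small > v_large else "large-later"

for name, d in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    # Evaluate the same choice on day 0, and again on day 4.
    print(name, preferred(d, now=0), preferred(d, now=4))
# exponential: large-later both times (day-0-me and day-4-me agree)
# hyperbolic:  large-later on day 0, small-sooner on day 4 (they disagree)
```

The last line is the point: the hyperbolic agent on day 0 and the hyperbolic agent on day 4 are both coherently maximizing, and they simply want different things.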
The default fact of the universe is that you aren’t the same agent today as tomorrow. An “agent” is a single entity with one set of preferences who makes unified decisions for himself, but today-you can’t make decisions for tomorrow-you any more than today-you can make decisions for today-me. Even if today-you seems to “make” a decision for tomorrow-you, tomorrow-you can just do something else. When it comes down to it, today-you isn’t the one pulling the trigger tomorrow. It may turn out that you are (approximately) an individual with consistent preferences over time, in which case it’s equivalent to today-you being able to make decisions for tomorrow-you, but if so that would be a very special case.
There are evolutionary pressures that encourage agency and exponential discounting in particular. I have also seen models that tried to generate some evolutionary reason for time inconsistency, but never convincingly. I suspect that really, it’s just plain hard to get all the different instances of a person to behave as a single agent across time, because that’s fundamentally not what people are.
The idea that you are a single agent over time is an illusion supported by inherited memories and altruistic feelings towards your future selves. If you all happen to agree on which one of you should get to eat the donut, I will be surprised.
Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input: (1) the doctor’s initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors), and (2) the false positive and false negative rates of the test.
The program would spit out the probability of having the disorder given positive and negative test results.
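To make that concrete, here is a minimal sketch of the arithmetic such a program would be doing behind the interface (the function and the example numbers below are mine, purely for illustration):

```python
def posterior(prior, false_pos_rate, false_neg_rate):
    """Return P(disorder | positive test) and P(disorder | negative test)."""
    sensitivity = 1.0 - false_neg_rate                      # P(positive | disorder)
    p_positive = sensitivity * prior + false_pos_rate * (1.0 - prior)
    p_given_pos = sensitivity * prior / p_positive          # Bayes theorem
    p_given_neg = false_neg_rate * prior / (1.0 - p_positive)
    return p_given_pos, p_given_neg

# Illustrative numbers: 1% prior, 5% false positive rate, 10% false negative rate.
pos, neg = posterior(prior=0.01, false_pos_rate=0.05, false_neg_rate=0.10)
print(f"P(disorder | positive) = {pos:.3f}")   # ~0.154
print(f"P(disorder | negative) = {neg:.4f}")   # ~0.0011
```

Note how far that 15% sits from the naive “the test is 95% accurate, so a positive result means a 95% chance” reading; that gap is the whole reason for the tool.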
Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure.
Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself. (Being sure to get the doctor’s prior before any test results are in). Not every doctor would be cooperative...but come to think of it, refusal to give you a number is a good sign that maybe you shouldn’t trust that particular doctor anyway.
Thanks, PPV is exactly what I’m after.
The alternative to giving a doctor positive & negative predictive values for each maternal age is to give false positive & negative rates for the test plus the prevalence rate for each maternal age. Not much difference in terms of the information load.
One concern I didn’t consider before is that many doctors would probably resist reporting PPVs to their patients, because they are currently recommending tests that, if they actually admitted the PPVs, would look ridiculous! (e.g. breast cancer screening).
“False positive rate” and “False negative rate” have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates?
Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.
Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test.
Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn’t know how to combine these into the probability of Down syndrome given a positive test result.
Okay, so to the extent that it’s possible, why doesn’t someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, “Here is the probability that a patient with Down syndrome will have a negative test result,” why not just directly say, “When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome.”
Bayes theorem is a general tool that would let doctors manipulate the information they’re given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their need for Bayes theorem by simply giving them different (not necessarily much more) information?
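For illustration, the upstream work could literally be a small pre-computed table: P(Down | positive) for each maternal age band, derived once from the test’s error rates and age-specific prevalence. A hedged sketch, where every number is an invented stand-in rather than a real clinical figure:

```python
FALSE_POS_RATE = 0.05   # P(positive | no Down syndrome) -- illustrative
FALSE_NEG_RATE = 0.10   # P(negative | Down syndrome)    -- illustrative

# Invented prevalence figures by maternal age band.
PREVALENCE = {"25-29": 1/1200, "30-34": 1/700, "35-39": 1/250, "40+": 1/80}

def p_down_given_positive(prevalence):
    sensitivity = 1.0 - FALSE_NEG_RATE
    p_positive = sensitivity * prevalence + FALSE_POS_RATE * (1.0 - prevalence)
    return sensitivity * prevalence / p_positive            # Bayes theorem

for age, prev in PREVALENCE.items():
    print(f"age {age}: P(Down | positive test) = {p_down_given_positive(prev):.1%}")
# age 25-29: 1.5%, age 30-34: 2.5%, age 35-39: 6.7%, age 40+: 18.6%
```

Handed that table, the doctor never needs Bayes theorem at all; the statistical literacy lives upstream, exactly once.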
There are counterpoints to consider. But it seems to me that many examples of Bayesian failure in medicine are analogously simple to the above, and could be as simply fixed. The statistical illiteracy of doctors can be offset so long as there are statistically literate people upstream.
Closeness in the experiment was reasonably literal but may also be interpreted in terms of identification with the torturer. If the church is doing the torturing then the especially religious may be more likely to think the tortured are guilty. If the state is doing the torturing then the especially patriotic (close to their country) may be more likely to think that the tortured/killed/jailed/abused are guilty. That part is fairly obvious but note the second less obvious implication–the worse the victim is treated the more the religious/patriotic will believe the victim is guilty. … Research in moral reasoning is important because understanding why good people do evil things is more important than understanding why evil people do evil things.
I dislike this quote because it obscures the true nature of the dilemma, namely the tension between individual and collective action. Being “not in one’s right mind” is a red herring in this context. Each individual action can be perfectly sensible for the individual, while still leading to a socially terrible outcome.
The real problem is not that some genius invents nuclear weapons and then idiotically decides to incite global nuclear war, “shooting from the hip” to his own detriment. The real problem is that incentives can be aligned so that it is in everyone’s interest, every step along the way, to do their part in their own ultimate destruction.
Of course, if “right mind” were defined to mean “socially optimal mind,” fine, we aren’t in our right mind. But I don’t think that’s the default interpretation.
This post, by its contents and tone, seems to really emphasize the downside of signaling. So let me play the other side.
Enabling signaling can add or subtract a huge amount of value relative to what would happen without signaling. You can tweak your initial example to get a “rat race” outcome where everyone, including the stupid people, sends a costly signal that ends up being completely uninformative (since everyone sends it). But you can also make it prohibitively mentally painful for stupid people to go to college, versus neutral or even enjoyable for smart people (so that signaling carries no real economic cost), with a huge gain to employers from being able to tell them apart.
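To put both outcomes of that tweak in one frame, here is a toy Spence-style sketch (all wages and costs are invented for illustration). Whether college separates the types, or merely fuels a rat race, turns entirely on how the cost of the signal differs across types:

```python
WAGE_SKILLED, WAGE_UNSKILLED = 100, 60   # wages when the employer can tell types apart

def college_separates(cost_smart, cost_stupid):
    # Separating equilibrium: smart people find the degree worth the wage
    # premium (WAGE_SKILLED - cost_smart > WAGE_UNSKILLED), while stupid
    # people do not (WAGE_UNSKILLED > WAGE_SKILLED - cost_stupid).
    premium = WAGE_SKILLED - WAGE_UNSKILLED
    return cost_smart < premium and cost_stupid > premium

# Rat race: college is roughly equally costly for both types, so both mimic
# the signal, it conveys nothing, and the cost is pure waste.
print(college_separates(cost_smart=30, cost_stupid=35))   # False

# Painful-for-the-stupid version: the signal now sorts the types,
# with a real informational gain to employers.
print(college_separates(cost_smart=10, cost_stupid=70))   # True
```

Same institution, opposite welfare verdicts, depending only on the cost structure.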
One can look at Nikolai Roussanov’s study of how the dynamics of signaling games in US minority communities encourage conspicuous consumption and prevent members of those communities from investing in education and other important goods.
As a counterpoint to this, in other cases the signaling value of education may induce people to get more education than is individually optimal, which is actually a good thing socially if you think education has large positive externalities. And if you work hard and discover a cure for cancer, you will be paid largely through other people’s opinions of you, now that you’ve signaled to them that you are such an intelligent and hard-working and socially conscious person. (You were just as intelligent before you cured it, but now they know.) Since you cannot possibly hope to recoup even a modest fraction of the social value you will have created, that extra reputational payment is unambiguously good for incentives.
On any other site, I would probably get away with saying: Since invention is basically the reason for our high modern standards of living, if signaling seriously encourages it, then in the long run the positive value of signaling would seem to dwarf any losses discussed above (even the “poverty” of some minority communities is nothing compared to the poverty in all of our shared historical past). But here...well, here we are pretty worried about where our invention spree might be leading us.
This sounds awesome. It would be really cool if you could configure it so that identifying biases actually helps you to win by some tangible measure. For example, if figuring out a bias just meant that person stopped playing with bias (instead of drawing a new bias), figuring out biases would be instrumental in winning. The parameters could be tweaked of course (if people typically figure out the biases quickly, you could make it so they redraw biases several times). Or you could link drawing additional biases with the drawing of epidemic cards?
I have this terrifying vision of a version where it is biases—not diseases—which spread throughout the world, and whenever a player’s piece is in a city infected with a certain bias, they have to play with it...
Luke, I thought this was a good post for the following reasons.
(1) Not everything needs to be an argument to persuade. Sometimes it’s useful to invest your limited resources in better illuminating your position instead of illuminating how we ought to arrive at your position. Many LWers already respect your opinions, and it’s sometimes useful to simply know what they are.
The charitable reading of this post is not that it’s an attempted argument via cherry-picked examples that support your feeling of hopefulness. Instead I read it as an attempt to communicate your level of hopefulness accurately to people who you largely expect to be less hopeful. This is an imprecise business that necessarily involves some emotional language, but ultimately I think you are just saying: do not privilege induction with such confidence; we live in a time of change.
It might quell a whole class of complaints if you said something like that in the post. Perhaps you feel you’ve noticed a lot of things that made you question and revise your prior confidence about the unchangingness of the world...if so, why not tell us explicitly?
(2) I also see this post as a step in the direction of your stated goal to spend time writing well. It seems like something you spent time writing (at least relative to the amount of content it contains). Quite apart from that content, it is a big step in the direction of eloquence. LWers are programmed to notice/become alarmed when eloquence is being used to build up a shallow argument, but it’s the same sort of writing whether your argument is shallow or deep. This style of writing will do you a great service when it is attached to a much deeper argument. So at the least it’s good practice, and evidence that you should stick with your goal.
Anonymous; quoted for instance in The Manager’s Dilemma