if management are doing that then they are neglecting a powerful tool in their tool-kit, because announcing a G* will surely cause G* to fall, and experience says that to begin with a well-chosen G* and G remain correlated (because many of the things you do to reduce G* also reduce G). It is only over time that G* and G detach.
botogol
At work a large part of my job involves choosing G*, and I can report that Goodhart’s Law is very powerful and readily observable.
Further: rational players in the workplace know full well that management desire G, and that G* is not well-correlated with G, but nonetheless if they are rewarded on G*, then that’s what they will focus on. The best solution—in my experience—is mentioned in the post: the balanced scorecard. Define several measures G1, G2, G3 and G4 that are normally correlated with G. The correlation is then more persistent: if all four measures improve, it is likely that G will improve.
G1, G2, G3 and G4 may be presented as simultaneous measures, or, if setting four measures in one go is too confusing for people trying to prioritise (the fewer the measures the more powerful), they can be sequential. I.e. if you hope to improve G over two years, then measure G1 for two quarters, then switch the measurement to G2 for the next two, and so on. (Obviously you don’t tell people in advance.) NB this approach can be effective, but will make you very unpopular.
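Not from the post, but here is a minimal toy simulation of why rotation can work, with every number invented for illustration: assume the true goal G gets diminishing returns from four underlying activities, and that people pour effort into whichever proxy is currently announced while the neglected activities decay.

```python
import math

# Toy Goodhart sketch with made-up dynamics, not a model of a real workplace.

def true_G(effort):
    """Unobserved goal: every activity helps, but with diminishing returns."""
    return sum(math.sqrt(e) for e in effort)

def quarter(effort, announced):
    """One quarter: the announced proxy is gamed, the rest are neglected."""
    return [e + 1.0 if i == announced else max(0.0, e - 0.2)
            for i, e in enumerate(effort)]

def run(schedule):
    """schedule says which proxy (0..3) is announced in each quarter."""
    effort = [1.0] * 4
    for announced in schedule:
        effort = quarter(effort, announced)
    return true_G(effort)

fixed    = [0] * 8                         # one proxy held for two years
rotating = [q // 2 % 4 for q in range(8)]  # G1 G1 G2 G2 G3 G3 G4 G4
print("fixed proxy:    G =", round(run(fixed), 2))     # ~3.0: G* has detached
print("rotating proxy: G =", round(run(rotating), 2))  # ~5.4: still coupled
```

Under these assumed parameters the fixed proxy piles all effort into one saturated activity (Goodhart in action), while the two-quarter rotation keeps all four activities alive and G keeps rising.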
That’s true (that they have biases), although I understand the training is to attend to the nature of the injury, and the practicalities of the situation—e.g. danger to the firefighter—rather than the age of the victim.
However, what one might expect to see in firefighters would be ethical dilemmas like the trolley problem triggering the cerebral cortex more, and the amygdala less, than in other people.
Perhaps.
Unless of course the training works by manipulating the emotional response. So firefighters are just as emotional, but their emotions have been changed by their training.
This is the sort of problem Kahane was talking about when he said it is very difficult to interpret brain scans.
A person in the audience suggested taking firefighters, who sometimes face dilemmas very like this (do I try to save life-threatened person A or seriously injured baby B?), hooking them up to scans and seeing if their brains work differently—the hypothesis being that they would make decisions in dilemmas more ‘rationally’ and less ‘emotionally’, as a result of their experience and training. Or the predisposition that led to them becoming firefighters in the first place.
The opening was deliberate—it’s a common way that newspaper diarists start their entries… but perhaps it’s a common way that British newspaper diarists start their entries, and sounds wrong to American ears. So I have changed it. Nations divided by a common language, etc.
Yes. People get bogged down with the practical difficulties. Another common one is whether you have the strength to throw the stranger off the bridge (might he resist your assault and even throw you off?).
I think the problem is the phrasing of the question. People ask ‘would you push the fat man?’, but they should ask ‘SHOULD you push the fat man?’. A thought experiment is like an opinion poll: the phrasing of the question has a large impact on the answers given. Another reason to be suspicious of them.
No, I wasn’t declaring it meaningless.
My (perhaps trivial) points were that all hypothetical thought experiments are necessarily conducted in Far mode, even when the thought experiment is about simulating Near modes of thinking. Does that undermine it a little?
And
while all Thought Experiments are Far
Actual Experiments are Near.
I was illustrating that with what I hoped was an amusing anecdote—the bizarre experience I had last week of having the trolley problem discussed with the fat man actually personified and present in the room, sitting next to me, and how that nudged the thought experiment into something just slightly closer to a real experiment.
It’s easy to talk about sacrificing one person’s life to save five others, but hurting his feelings by appearing to be rude or unkind, in order to get to a logical truth, was harder. This is somewhat relevant to the subject of the talk—decisions may be made emotionally and then rationalised afterwards.
Look, I wasn’t hoping to provoke one of Eliezer’s ‘clicks’, just to raise a weekend smile and to discuss a scenario where lesswrong readers had no cached thought to fall back on.
:-( no, not a draft! It was just supposed to be light-hearted—fun even—and to make a small point along the way… it’s a shame if lesswrong articles must be earnest and deep.
Far & Near / Runaway Trolleys / The Proximity Of (Fat) Strangers
no, not at all, I don’t think rational = unemotional (and I liked EY’s article explaining how it is perfectly rational to feel sad … when something sad happens).
But rationality does seem to be strongly associated with a constant meta-analytical process: always thinking about a decision, then thinking about the way we were thinking about the decision, and then thinking about the self-imposed axioms we have used to model the way that we were thinking about the meta-thinking, and some angst about whether there are undetected biases in the way that… yada yada yada.
which is all great stuff,
but I wondered whether rationalists are like that all the time, or whether they ever come home late, open a beer or two and pick their nose while transfixed by Czechoslovakian wrestling on ESPN, without stopping to wonder why they are doing it, and wouldn’t it be more rational to go to bed already.
Do you act all rational at home . . or do you switch out of work mode and stuff pizza and beer in front of the TV like any normal akrasic person? (and if you do act all rational, what do your partner/family/housemates make of it? do any of them ever give you a slap upside the head?)
:-)
Can you make a living out of this rationality / SI / FAI stuff . . . or do you have to be independently wealthy?
I have been in Ashley’s situation—roped in to play a similar parlour game to demonstrate game theory in action.
In my case it was in a work setting: part of a two-day brainstorming / team-building boondoggle.
In my game there were five tables, each with eight people, all playing the same iterated game.
In four out of five tables every single person cooperated in every single iteration—including the first and the last. On the fifth table they got confused about the rules.
The reason for the behaviour was clear—the purpose of the game was to demonstrate that cooperation increased the total size of the pot (the game was structured that way). In a workplace setting the prize was to win the approbation of the trainers and managers by demonstrating that we were team players, and certainly NOT to be the asshole who cheated his tablemates and walked off with $50.
On the fifth table they managed to confuse themselves such that on the first iteration two of them unwittingly defected. Their table therefore ended up with the least money, but the two individuals of course ended up the richest in the room—they were hideously embarrassed.
I was left wondering what amount of money it would have taken to change behaviour. Would people defect if there was $1,000 at stake? In that setting, I think still not. $10,000? $100,000?
Practical game-theory experiments would be quite expensive to run, I think.
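The trainers’ actual rules aren’t given above, so here is a guessed public-goods-style reconstruction (stake, multiplier, round count and fixed strategies all invented) that reproduces the outcome: all-cooperate maximises the table’s pot, while two defectors at a cooperative table end up the richest individuals even as their table’s total falls.

```python
# Hypothetical reconstruction: each round every player either contributes a
# $1 stake (cooperate) or keeps it (defect); the pot is multiplied by 1.5
# and shared equally among all eight players at the table.

def play_table(strategies, rounds=10, stake=1.0, multiplier=1.5):
    """strategies: one 'C' or 'D' per player, held fixed across rounds."""
    wealth = [0.0] * len(strategies)
    for _ in range(rounds):
        pot = stake * multiplier * strategies.count('C')
        share = pot / len(strategies)
        for i, s in enumerate(strategies):
            wealth[i] += share - (stake if s == 'C' else 0.0)
    return wealth

all_coop   = play_table(['C'] * 8)
two_defect = play_table(['D', 'D'] + ['C'] * 6)
print("all cooperate: table total = %.2f, each player = %.2f"
      % (sum(all_coop), all_coop[0]))
print("two defectors: table total = %.2f, defector = %.2f, cooperator = %.2f"
      % (sum(two_defect), two_defect[0], two_defect[2]))
```

With these made-up numbers the all-cooperating table banks $40.00 against the confused table’s $30.00, yet each defector walks off with $11.25 to a cooperator’s $1.25, matching the pattern described above.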
“Do not be too moral. You may cheat yourself out of much life. Aim above morality. Be not simply good; be good for something” Thoreau
What a lot of comments! (And I was worried that it was all too trivial. Lesson: never underestimate the power of Dr Who.) Thanks all.
@Nanani—yes, indeed, the initial round-up of 600 or so was composed of waifs and strays like that, including the ill. But when the demand of 10% was acceded to, there wasn’t time to handpick.
@SharedPhoenix—I agree, and a strength of this story was that there was no easy way out. The scenario was played out right to the end, with the main character forced to make a rational sacrifice. OK, he found a way for it to be just one child, but there was still a choice.
@mikem—I disagree. Yes, there were selfish cabinet members simply looking out for their own (this was dealt with in several contexts—there was an assumption that the interests of one’s own child are beyond the limit of human rationality); however, the decision to accede to this, and actually make it policy, was taken by the prime minister for rational reasons. He recognised that unless he spared the children of the decision-makers and enforcers, there would be no decisions and no enforcing. It was purely rational. (And ‘units’—yes, I meant that it was plausible that such a sinister euphemism would be employed.)
@jwdink—yes, I was surprised they took that route (the rational give-in rather than the fight to the death); in TV-land it was an unusual decision. That’s why I wrote the post about it :-)
The Trolley Problem in popular culture: Torchwood Series 3
perhaps an arm-wrestling contest would be acceptable… hmm, but not possible on bloggingheads.tv… a face-pulling contest?
what I’d actually like to see would be Robin Hanson v Mencius Moldbug
William Gibson? http://www.williamgibsonbooks.com/index.asp
He also thinks a lot—and cleverly—about the future, but in a different way from Eliezer.
relevant article in the New Yorker http://www.newyorker.com/reporting/2007/08/20/070820fa_fact_page?currentPage=all