Zeroing Out
Link post
Related to (Eliezer Yudkowsky at Less Wrong): An Equilibrium of No Free Energy
Related to (Satvik Beri at Less Wrong): Competitive Truth Seeking
Follow-up to: Leaders of Men
I: Markets
Suppose you have an insight about Google. The efficient market hypothesis says you can’t make a profit. Your insight is not a new insight. The market has already priced it in. You know no more about Google’s future price than you did before.
That’s the bad news. That’s also the good news: If you didn’t have that insight, you wouldn’t know any less about Google’s future price. Efficient market!
I call this zeroing out. Your ignorance is not punished.
This means any unique knowledge you do find will be rewarded. If you get good Google news first, you don’t need to know anything else. Buy, buy, buy!
From Leaders of Men:
It is much, much easier to pick out a way in which a system is sub-optimal, than it is to implement or run that system at anything like its current level of optimization.
A corollary of this:
It is much, much easier to find a bias in a complex model, than it is to build an equally good model.
“I built a better model of Google’s future profits than the market” is a hard sell. “I found a factor the market isn’t properly accounting for” or “I got this news and figured out what it means before the market could price it in” is still a tough sell – it’s Google – but it’s less impossible.
Taking the difficulty down, consider odds on the winner of a football game. Both tasks – finding a bias and building a full model – are now realistic. You would still have a much easier time figuring out “Alabama wins more often than people think” than “Alabama is 95.3% to win.”
The easiest way to beat consensus opinion is to use consensus opinion as an input.
It’s easy to beat the wisdom of the crowd a little – average your opinion into the crowd.
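As a toy illustration of both moves, here is a sketch in Python with invented numbers: the consensus probability is the input, and we either nudge it by an estimated bias or average our own opinion into it. None of these figures come from anywhere real.

```python
# All numbers are invented - a sketch of using consensus as an input, not a betting model.
crowd_p = 0.90   # consensus probability that Alabama wins
my_p    = 0.97   # my from-scratch estimate (hard mode: must be right in absolute terms)

# Easier mode: I only claim "Alabama wins more often than people think,"
# so I nudge the consensus by a small estimated bias.
bias = 0.03
nudged = min(crowd_p + bias, 1.0)

# Easier still: beat the crowd a little by averaging my opinion into it.
w = 0.2  # how much weight I give myself relative to the consensus
blended = (1 - w) * crowd_p + w * my_p

print(f"consensus: {crowd_p:.3f}, nudged: {nudged:.3f}, blended: {blended:.3f}")
```

Either way the consensus does almost all the work; our contribution is a small correction on top of it.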
II: Hiring
Hiring an engineer is trickier. Assume a consensus interview process.
Satvik suggests focusing on finding candidates like Bob, who does poorly in interviews but is an excellent employee, rather than candidates like Alex, who is an excellent employee and also interviews excellently. Alex likely won’t take the job, but Bob will. Put another way, demand for Alex is higher, so Alex’s market price is higher than Bob’s. Optimize for finding Bob.
The problem is Charlie. Charlie interviews badly and he’d be a terrible employee. You do not want to hire Charlie, and he’ll take any job he can get. The consensus interview process correctly exposes Charlie’s awfulness. Your process failing to reject Charlie would be very bad, and most applicants are Charlies.
If you started with a standard ‘consensus interview process score’ from a central interviewing firm, you could look for signs of bias in that process without understanding the process. You don’t have that.
If a flaw is caught by the consensus process, and you don’t check for it, you’ll end up with a lot of people with that flaw – everyone the consensus rejects for it lands in your pipeline. Sometimes that’s good, because the flaw is quirky and meaningless, but usually it is bad. By checking for such flaws, the consensus forces you to check for them too, even if they are rare. If everyone else tests for cocaine addiction, and you don’t, you’ll hire a lot of cocaine addicts.
You get favorable selection for Bobs, but adverse selection for Charlies.
So you ask coding questions, check references, and so on. You redo all that work to recover the signal the consensus process would have given you. On top of that, you need to identify the kind of ‘bad at interviewing’ or other feature that the consensus punishes more than it should.
You have to build the entire model of Bob to know he’s Bob and not Charlie. Much harder.
The ‘score of 0’ from using the consensus method starts to look good. You can do better, but doing even that well is hard!
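Here is a toy simulation of that adverse selection, with made-up rates (the population mix, pass rates, and flaw frequency are all invented): if you drop a check the consensus performs, the flaw concentrates in your hires.

```python
import random

random.seed(0)

# Invented population: mostly Charlies (bad employee, bad interview), some Bobs
# (good employee, bad interview), a few Alexes (good at both).
def applicant():
    r = random.random()
    if r < 0.80:
        return {"kind": "Charlie", "has_flaw": random.random() < 0.5}
    elif r < 0.95:
        return {"kind": "Bob", "has_flaw": False}
    return {"kind": "Alex", "has_flaw": False}

# My process interviews loosely (so Bobs can pass) and may or may not
# replicate the consensus check for the hidden flaw.
PASS_RATE = {"Charlie": 0.3, "Bob": 0.7, "Alex": 0.9}

def my_process(a, check_flaw):
    if check_flaw and a["has_flaw"]:
        return False
    return random.random() < PASS_RATE[a["kind"]]

def hire_stats(check_flaw, n=100_000):
    hired = []
    for _ in range(n):
        a = applicant()
        if my_process(a, check_flaw):
            hired.append(a)
    flawed = sum(a["has_flaw"] for a in hired) / len(hired)
    return len(hired), flawed

for check in (True, False):
    count, flawed = hire_stats(check)
    print(f"check_flaw={check}: hired {count}, fraction with the flaw = {flawed:.1%}")
```

In reality it is worse than this simulation suggests, because everyone the consensus rejects for the flaw keeps applying until they find the one employer who doesn’t check.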
III: Buying Cereal
You decide to buy Cheerios. There are three box sizes.
If the boxes are priced ‘correctly’, you know that the small box costs more per ounce than the medium box, which costs more per ounce than the big box.
Knowing the price is ‘correct’ simplifies things greatly. If you currently place a high value on a small box – maybe you have little storage space, or are trying a new brand – you can buy a small box. If you can properly use a bigger box, you can buy a bigger box and save. No math required.
Then one day producers realized we were relying on such heuristics, so they started using mixed strategies. Usually they would price the boxes naturally, but sometimes the bigger box would be more expensive per ounce, or the small box would cost almost as much as the big box.
In the first case, we could rely on bias – which solution worked relatively better for us? In the second case, we have to build a value model complete with numbers. Much harder.
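A sketch of the two regimes, with invented sizes and prices: under ‘correct’ pricing, the monotone per-ounce rule means you never need to do the math; under the mixed strategy, you have to compute unit prices every time.

```python
# Invented sizes (ounces) and prices - just to illustrate the heuristic and its failure.
def per_ounce(boxes):
    return [(name, price / oz) for name, oz, price in boxes]

correct = [("small", 12, 3.00), ("medium", 18, 3.60), ("large", 27, 4.59)]
mixed   = [("small", 12, 3.00), ("medium", 18, 3.60), ("large", 27, 5.94)]

for label, boxes in (("correct", correct), ("mixed", mixed)):
    prices = per_ounce(boxes)
    # The no-math heuristic: bigger boxes are never worse per ounce.
    safe = all(a[1] >= b[1] for a, b in zip(prices, prices[1:]))
    print(label, [(n, round(p, 3)) for n, p in prices], "heuristic safe:", safe)
```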
IV: One to Zero
Zeroing out is a wonderful thing. Where you have measures and signals that are sufficiently credible, you can abstract all aspects of a problem away except the ones you care about.
In the Google example, we can zero out everything at the start, allowing us to safely bring in genuine new information (if we somehow have it), or to decide whether we want to own shares in a company like Google.
In the cereal example, we could zero out everything except box size versus price per ounce. The products were identical. We would prefer to zero that out too, but found we couldn’t safely do so.
In the hiring example, our goal is to zero out employee quality so that we can isolate and model interviewing skill. What we actually care about is interviewing skill given employee quality. We must establish the virtual market price for the employee’s quality first. Then we look at skill.
This illustrates the danger of adverse selection when modeling prices. We can think of interviewing as generating a fair price for the new employee, and comparing to market price. If Charlie is a cocaine addict, and you don’t know that but the market does, you’ll reliably overpay for lots of Charlies.
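With made-up numbers, the overpayment is mechanical: you price Charlie off visible signals, the market also prices in the hidden flaw, and only the flawed Charlies accept your offer.

```python
# Made-up numbers. You price Charlie from visible signals only;
# the market also knows about the hidden flaw.
value_if_fine   = 100_000  # Charlie's value as an employee if he's fine
value_if_flawed = 20_000   # his value if he has the hidden flaw
p_flawed        = 0.5      # chance of the flaw, which you cannot see

your_offer = (1 - p_flawed) * value_if_fine + p_flawed * value_if_flawed  # 60,000

# The market offers fine Charlies ~100,000 and flawed Charlies ~20,000,
# so only the flawed ones take your 60,000 - and each one costs you 40,000.
print(f"your offer: {your_offer:,.0f}; every Charlie who accepts is worth {value_if_flawed:,}")
```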
Attempting to form a model from scratch creates a burden of omniscience. You have to throw out all the social information. Every mistake you make and everything you overlook punishes you. Combined with adverse selection, this leads to many costly and embarrassing errors. Thus it is often right to abstract away as much as you can, or to copy the consensus view or method in most aspects, and focus where you can improve. Often one must fully model the existing system, even its mistakes, to understand what its outputs mean – if you know that something costs $50 but ‘should’ cost $75, you’ll know something is up, and shouldn’t decide how much you value the product until you’ve found that missing information.