[1] You can run things in these ways. I have seen it. It helps. A lot.
[2] We must treat creating additional managerial levels as having very high costs. This is not an action to be taken lightly. Wherever possible, create distinct organizations and allow them to interact. Even better, allow people to interact as individuals.
Expanding on what these look like in practice might be useful (since that isn’t the default*), and having a concrete example to work from might help people. (This is intended as an indication of interest, not criticism.)
Long:
I do not intend to defend them further unless I see an opportunity in doing so.
opportunity = benefit?
Doing anything with an intent to deceive, or an intent to game your metrics at the expense of object level results, needs to be an automatic “you’re fired.”
How can object level results be used as metrics, instead of proxies? (And how can ‘metrics being gamed’ be measured, particularly automatically?)
If you want to ‘disrupt’ an area that is suffering from maze dysfunction, it makes sense to bypass the existing systems entirely. Thus, move fast, break things.
Interesting point.
The tech industry also exhibits some very maze-like behaviors of its own, but it takes a different form. I am unlikely to be the best person to tackle those details, as others have better direct experience, and I will not attempt to tackle them here and now.
Other mazes.
The next section will ask why it was different in the past, what the causes are in general, and whether we can duplicate past conditions in good ways.
I look forward to the next post!
*per this quote:
These dynamics are the default result of large organizations. There is continuous pressure over time pushing towards such outcomes.
How can object level results be used as metrics, instead of proxies? (And how can ‘metrics being gamed’ be measured, particularly automatically?)
The chapter called “The Goodhart Trap” seems to be about this being impossible in principle. Anything you use to measure is by its nature a proxy, and so subject to Goodharting. The map is not the territory.
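Since the question of metrics versus object-level results keeps coming up, here is a minimal sketch (my own toy model, not anything from the post or the chapter) of one flavor of Goodharting: if the measured proxy is the true, unmeasurable quality plus independent noise, then selecting hard on the proxy partly selects for noise, and the selected candidates’ true quality falls short of what selecting on the truth would give.

```python
import random

# Toy model: each candidate has a true quality we cannot observe, and a
# measurable proxy equal to that quality plus independent noise.
# "The map is not the territory" -- ranking on the map also ranks noise.
random.seed(0)
n = 10_000
true_quality = [random.gauss(0, 1) for _ in range(n)]
proxy = [t + random.gauss(0, 1) for t in true_quality]

def top_mean(scores, k=100):
    """Mean TRUE quality of the k candidates that `scores` ranks highest."""
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    return sum(true_quality[i] for i in top) / k

selected_by_truth = top_mean(true_quality)  # the unattainable ideal
selected_by_proxy = top_mean(proxy)         # what optimizing the metric delivers
print(f"mean true quality, selected by truth: {selected_by_truth:.2f}")
print(f"mean true quality, selected by proxy: {selected_by_proxy:.2f}")
```

With these toy numbers the proxy-selected top 100 score noticeably worse on true quality than the truth-selected top 100, even though the proxy is an unbiased measurement of each individual candidate; the gap comes purely from selecting on a noisy measure.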
I really like this post.
The chapter called “The Goodhart Trap” seems to be about this being impossible in principle. Anything you use to measure is by its nature a proxy, and so subject to Goodharting. The map is not the territory.
Thought Experiments:
A plane is produced. The plane is flown without incident and no one dies. (This does not make the news.)
A toxic pesticide is produced, and used widely. Consequently, every living thing on the surface of the Earth dies.
These seem like object level results, and do seem measurable. So it seems that things can be measured.
The map is not the territory. In the process of measuring the deaths due to a pesticide, you need a complex model of causality. That model means you are working with an abstraction.
If your drug gets unblinded because it has strong side effects, it will perform better against placebo. That is a way to Goodhart the gold standard we use to establish whether a drug causally helps a patient.
Any model of the causality of deaths due to your pesticide will be subject to Goodharting.
Why? Doesn’t Goodharting require optimization?
Suppose a) the world was nuked, and b) everyone died. Would you call A the cause of B?
You do that to the extent that you have a causal model in your head that links the two. Take the issue of toxic pesticides: new pesticides got used, and a lot of our bees died. Whether or not there’s even a correlation is subject to public debate. That’s what real-world examples look like.
Suppose a) the world was nuked, and b) everyone died. Would you call A the cause of B?
Editing posts afterwards to remove statements is not a great way to have rational debate.
It wasn’t removed, it was moved.
Suppose we jettisoned causality. What exactly do you think can, and cannot, be measured?