After thinking about this more, I think that the 80⁄20 rule is a good heuristic before optimization. The whole point of optimizing is to pluck the low-hanging fruit, exploit the 80⁄20 rule, eliminate the alpha, and end up with a system where the remaining variation is the result of many small contributing factors that aren’t worth optimizing anymore.
When we find systems in the wild where the 80⁄20 rule doesn’t seem to apply, we are often looking at a system that has already been optimized for the result. Most phenotypes are polygenic, and this is because evolution has been optimizing for advantageous phenotypes. The premise of “Atomic Habits” is that the accumulation of small habit wins compounds over time, and again, this is because we already do a lot of optimizing of our habits and routines.
It is in domains where there’s less pressure or ability to optimize for a specific outcome that the 80⁄20 rule will be most in force.
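To make that concrete, here’s a minimal sketch in Python (the Pareto distribution, the alpha value, and every name in it are my own illustrative assumptions, not anything from the post): it draws factor impacts from a heavy-tailed distribution where the 80⁄20 rule holds, then “optimizes” by eliminating the largest contributors, and shows that the residual system is much flatter.

```python
# Minimal sketch, assuming impacts in an unoptimized system follow a
# Pareto distribution (alpha ~= 1.16 is the value that yields the
# classic 80/20 split). All names here are illustrative.
import random

random.seed(0)

# Unoptimized system: 1000 factor impacts, heavy-tailed, sorted largest first.
impacts = sorted((random.paretovariate(1.16) for _ in range(1000)), reverse=True)

def share_of_top_20pct(xs):
    """Fraction of total impact contributed by the top 20% of factors."""
    k = len(xs) // 5
    top = sorted(xs, reverse=True)[:k]
    return sum(top) / sum(xs)

print(f"before optimizing: top 20% of factors -> {share_of_top_20pct(impacts):.0%} of impact")

# "Optimizing" = plucking the low-hanging fruit: eliminate the 200 biggest factors.
remaining = impacts[200:]

print(f"after optimizing:  top 20% of factors -> {share_of_top_20pct(remaining):.0%} of impact")
```

Before optimizing, the top 20% of factors account for roughly 80% of the impact; after the biggest contributors are eliminated, the top 20% of what remains accounts for only about a third — the regime where the remaining variation comes from many small factors that aren’t worth chasing.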
It’s interesting to consider how this jibes with “you can only have one top priority.” OpenAI clearly has capabilities enhancement as its top priority. How do we know this? Because there are clearly huge wins available to it if it were optimizing for safety, and no obvious huge wins left for improving capabilities. That means they’ve been optimizing for capabilities.
This is also my current heuristic, and the main way that I now disagree with the post.
That doesn’t always mean you can’t make big improvements, because the goal the system was optimized for may not match your goals.
Agreed. And even after you’ve plucked all the low-hanging fruit, the high-hanging fruit may still offer the greatest marginal gains, justifying effort on optimizations that yield only small improvements. This is particularly true if there are high switching costs to transition between top priorities/values in a large organization. Even if OpenAI is sincere in its “capabilities and alignment go hand in hand” thesis, they may find that their association with Microsoft imposes huge or insurmountable switching costs, even when they think the time is right to stop prioritizing capabilities and start directly prioritizing alignment.
And of course, the fact that they’ve associated with a business that cares for nothing but profit is another sign that OpenAI’s priority was capabilities, pure and simple, all along. It would have been relatively easy to preserve their option to switch to a safety priority if they’d remained independent. I predict they will not be able to do so, that they could have foreseen this, and that they didn’t care as much as they cared about impressive technology and making money.