So if 10x software engineers exist, it's because they develop architecture, interfaces, and patterns such that, over time, each feature costs a tenth of the total human time. Bad code consumes enormous amounts of time to deal with, and bad architecture, the kind that blocks adding new features or makes localizing a bug difficult, is the worst case.
But being this good comes mostly from knowledge, learned either in school or over years of doing it the wrong way and seeing how it fails.
It's not an intelligence thing. A genius SWE can locate a bug on a hunch; a 10x SWE writes code where the bug doesn't exist in the first place, or is obvious every time it does.
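A minimal sketch of the "bug doesn't exist" idea, in Python. The domain (order statuses) and all the names here are my own illustration, not anything from the thread; the point is only that choosing a representation where invalid states can't be written turns a silent runtime bug into an immediate, obvious failure:

```python
from dataclasses import dataclass
from enum import Enum

# Error-prone style: status is a free-form string. A typo like
# "actve" is accepted silently and only surfaces much later,
# when some query mysteriously never matches this order.
loose_order = {"status": "actve"}

class Status(Enum):
    ACTIVE = "active"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class Order:
    status: Status  # only valid statuses are representable

# The same typo now fails loudly at the point of creation:
# Status("actve") raises ValueError instead of corrupting data.
order = Order(status=Status.ACTIVE)
print(order.status.value)
```

The bug class isn't found faster; it's designed out, which is the distinction the comment is drawing between genius debugging and 10x design.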
For a lot of the other examples I gave, my impression is that no, I/O is everything. Measuring the mass of the electron took laborious effort over many hours, most of it spent dealing with the equipment. Nobody can cut ten times faster in surgery; hands can't move that quickly. Same with fixing a car. CERN scientists are obviously limited by all sorts of equipment issues. Same with AI research: the limiting factor has been equipment from the start. "Equipment issues" means either you get your hands dirty fixing it yourself, which is I/O- or spare-parts-bound, or you tell someone else to do it and their time to fix it is bound the same way.
Some of the best scientists in history could fix equipment issues themselves; this likely broadened their skill base and made their later discoveries feasible.
You are operating on the wrong level of analysis here. The question is about skill improvement, not execution.
They aren't the same thing? I mean, for the topic of interest, AI alignment, there is nothing to learn from other humans or improve on past a certain baseline level of knowledge. Past a certain point, I suspect reading papers on it would send your learning curve negative, because you'd just be absorbing the errors of the people before you.
Improving past that point has to mean designing and executing high-knowledge-gain experiments, and that's I/O- and funding-bound.
I would argue that the above is the rule for anything humans cannot already do.
Were you thinking of skills with a confined, objective task? Like StarCraft 2 or Go? The former is strongly I/O-bound.
I'm very confident we're talking past each other, and I'm not in the mood to figure out what we actually disagree on. I think we're using "I/O" differently, and I claim your usage permits improvements to the process, which contradicts your argument.