They aren’t the same thing? I mean, for the topics of interest, like AI alignment, there is nothing to learn from other humans or improve on past a certain baseline level of knowledge. Past that point, I suspect your learning curve from reading papers would go negative, because you’re just learning from the errors people before you made.
Improving past that point has to mean designing and executing high-knowledge-gain experiments, and that’s I/O- and funding-bound.
I would argue that the above is the rule for anything humans cannot already do.
Were you thinking of skills with a confined, objective task? Like StarCraft 2 or Go? The former being strongly I/O-bound.
I’m very confident we’re talking past each other, and I’m not in the mood to figure out what we actually disagree on. I think we’re using “I/O” differently, and I claim your usage permits improvements to the process, which contradicts your argument.