I largely don’t think we’re disagreeing? My point didn’t depend on a distinction between ‘raw’ capabilities vs. ‘possible right now with enough arranging’ capabilities. It was mostly: “I don’t see what you could actually delegate right now, as opposed to operating in the normal paradigm of AI co-work the OP says they already do (chat, Copilot, imagegen)”, and your personal example details exactly why you couldn’t currently delegate a task. That sounds like agreement.
Also, I didn’t really consider your example of:
> “email your current blog post draft to the assistant for copyediting”.
to be outside the paradigm of AI co-work the OP is already doing, even if it saves them time. Scaling this kind of work up to $1k would be pretty difficult, and it also falls outside what I took their question to be, since it amounts to “just work a lot more yourself, so that the proportion of your work that uses AI rises until you hit $1k”. That’s a lot of API credits for such ordinary personal use.
…
But back to your example: I do question just how large a leap of insight/connection would be necessary to write the standard Gwern mini-article. Maybe in this exact case you know there is enough latent insight/connection in your clippings/writings, plus the LLM corpus and possibly some rudimentary Wikipedia/tool use, that your prompt supplying the cherry-on-top connecting idea (‘spontaneous biting is prey drive!’) could actually produce a Gwern-approved mini-essay. You’d know the level of insight-leap for such articles better than I would, but do you really think there’d be many such things within reach for very long? I’d argue that an agent which could do this semi-indefinitely, rather than just clearing your backlog of maybe ~20 such ideas, would be much more capable than anything we currently see, in terms of necessary ‘raw’ capability. But maybe I’m wrong, and you regularly have ideas that fit this pattern, where the bar to pass isn’t “be even close to as capable as Gwern” but rather “there’s enough lying around to make the final connection; just write it up in the style of Gwern”.
Clearly, something that could actually write an arbitrary Gwern article would have at least your level of capability, and would foom or something similar; it’d be self-sustaining. Instead, what you’re describing is a setup where most of the insight, knowledge, and connection is already there, an instance of what I’d argue is a narrow band of tasks that could be delegated without requiring {capability powerful enough to self-sustain and maybe foom}. I don’t think this band is very wide; I can’t think of many tasks that fit the description. But I failed to think of your class of example, or of eggsyntax’s example below of call-center automation, so perhaps I’m simply blanking on others and the band is wider than I thought.
But if not, then your original suggestion of, basically, “first think of what you could delegate to another human” seems a fraught starting point, because the supermajority of such tasks would require capability sufficient for self-sustaining, ~foomy agents, and we don’t yet observe any such agents; if we did, our world would look very different.