No matter how much I try, I just cannot force myself to buy the premise of replacement of human labor as a reasonable goal. Consider the apocryphal quote:
If I had asked people what they wanted, they would have said faster horses. –Henry Ford
I’m clearly in the wrong here, because every CEO who talks about the subject talks about faster horses[1], and here we have Mechanize whose goal is to build faster horses, and here is the AI community concerned about the severe societal impacts of digital horse shit.
Why, exactly, are all these people who are into accelerating technical development and automation of the economy working so hard at cramming the AI into the shape of a horse?
[1] For clarity, “faster horses” here is a metaphor for the AI just replacing human workers at their existing jobs.
Because that’s what investors want. From what I’ve observed at my workplace, a B2C software company[1], and from what I hear from others in the space, there is tremendous pressure from investors to incorporate AI, and particularly “AI agents,” in whatever way is possible, whether or not it makes sense in context. Investors are enthusiastic about “a cheap drop-in replacement for a human worker” in a way that they are not for “tools which make employees better at some tasks.”
The CEOs are reading the script they need to read to make their boards happy. That script talks about faster horses, so by golly their companies have the fastest horses to ever horse.
Meanwhile you have tools like Copilot and Cursor, which let workers vastly amplify their output without fully offloading the work, and you have structured outputs from LLMs allowing conversion of unstructured to structured data at incredible scale. But talking about your adoption of those tools will not get you funding, and so you don’t hear as much about that style of tooling.
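To make that second category concrete, here is a minimal sketch of unstructured-to-structured conversion. It assumes nothing about a particular provider: call_llm is a hypothetical stand-in for whatever LLM client you use, and the ticket schema is invented for the example.

```python
import json

# Hypothetical stand-in for any LLM call that can be coaxed into
# returning JSON only; swap in your actual provider's client here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider")

SCHEMA_HINT = (
    "Extract fields from the support ticket below. Return ONLY JSON "
    'shaped like {"product": str, "severity": "low"|"medium"|"high", '
    '"summary": str}.'
)

def extract_ticket_fields(ticket_text: str) -> dict:
    """Turn one free-text support ticket into structured fields."""
    raw = call_llm(f"{SCHEMA_HINT}\n\nTicket:\n{ticket_text}")
    return json.loads(raw)  # in production, validate against the schema

# Run this over thousands of tickets and you get queryable rows out of
# free text. Nobody's job is replaced; a triager's throughput goes up.
```

The point of the sketch is the shape of the tool: it amplifies a worker’s throughput on one task rather than impersonating the worker.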
[1] Obligatory “Views expressed are my own and do not necessarily reflect those of my employer.”
I second what both @faul_sname and @Noosphere89 said. I’d add: consider ease and speed of integration. Organizational inertia can be a very big bottleneck, and companies often think in FTEs. Ultimately, no, I don’t think it makes sense to have anything like 1:1 replacement of human workers with AI agents. But as a staged process over time? If you can do 1:1 replacement, you get a huge up-front payoff, and you can use part of that payoff to refactor tasks/jobs/products/companies/industries to better take advantage of whatever else AI lets you do differently or instead.
“Ok, because I have Replacement Agent AI v1 I was able to fire all the people with job titles A-D, now I can hire a dozen people to figure out how to use AI to do the same for job titles E through Q, and then another dozen to reorganize all the work that was being done by A-Q into more effective chunks appropriate for the AI, and then three AI engineers to figure out how to automate the two dozen people I just had to hire...”
This is basically because of the value of the long tail.
Automating 50% or even 90% of a job is not enough to capture much of the value proposition of AI, because the human then becomes the bottleneck, and that bottleneck is especially severe in cases requiring high speed or lots of context.
@johnswentworth talks about the issue here:
https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail
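To put toy numbers on that bottleneck (an Amdahl’s-law-style calculation; the percentages are illustrative, not from the thread):

```python
# Toy model: AI automates a fraction p of a job at ~zero marginal time,
# but a human still has to do the remaining (1 - p). Throughput is then
# capped by the human's share, exactly as with Amdahl's law.
def max_speedup(p: float) -> float:
    """Upper bound on speedup when the automated fraction p takes ~no time."""
    return 1 / (1 - p)

for p in (0.5, 0.9, 0.99):
    print(f"automate {p:.0%} -> at most {max_speedup(p):.0f}x throughput")

# automate 50% -> at most 2x
# automate 90% -> at most 10x
# automate 99% -> at most 100x
```

Automating half the job at best doubles throughput; even 90% caps you at 10x, with the human in the loop on every item. The big multipliers only show up once the long tail is handled too.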