One reason why artificial intelligence might be more useful than a human for some service is that artificial intelligence is software, and can therefore be copied for every service we might want in an industry.
Recruiting and training humans takes time, whereas if you already have an ML model that performs well on a given task, you only need to acquire the relevant hardware to run the model. If hardware is cheap enough, I can see how using artificial intelligence could be much cheaper than spending money on {training + recruiting + wages} for a human. Automation in jobs such as audio transcription exemplifies this trend, although I think the automation curve is smooth, since software services require continuously less supervision as they improve.
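To make the {training + recruiting + wages} comparison concrete, here is a minimal break-even sketch. Every number below is a hypothetical placeholder, not real data; the point is only the shape of the comparison: a one-off hardware cost plus low running costs, against recruiting, training, and a recurring wage.

```python
# Hypothetical break-even sketch. All costs are illustrative assumptions.

def human_cost(years, recruiting=10_000, training=20_000, annual_wage=50_000):
    """Total cost of recruiting, training, and paying a human for `years` years."""
    return recruiting + training + annual_wage * years

def model_cost(years, hardware=15_000, annual_running=5_000):
    """Total cost of buying hardware and running an already-trained model."""
    return hardware + annual_running * years

def break_even_year(max_years=50):
    """First whole year at which the model becomes cheaper than the human."""
    for y in range(1, max_years + 1):
        if model_cost(y) < human_cost(y):
            return y
    return None
```

With these (made-up) defaults the model is already cheaper in year one; the interesting case is when hardware is expensive enough that the break-even point moves out several years.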
In theory I agree with this; in practice:
It seems that the scope of ML has remained fairly stable over the last 40 years or so (various NLP tasks, image classification and object outlining/labeling, fitting numerical equations to predict a category or number, plus generative models for images, which seem to have gained interest only recently).
In spite of this limited scope of tasks, the number of people maintaining ML infrastructure and working in ML research seems to keep increasing.
A specialized company seems to pop up in every field, from cabbage maturation detection to dog breed validation, with its dozens of employees, of which at least a few are actually responsible for the task of copy-pasting code from GitHub, and often enough they seem to fail at it or perform unreasonably badly.
Ever had to figure out why the specific cuDNN/PyTorch/TensorFlow setup on a given environment is not working?
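On that debugging point: a useful first step is to print the versions PyTorch reports, since mismatches between the PyTorch build, the CUDA toolkit, and cuDNN are a common culprit. A minimal sketch (the function name is mine, and it degrades gracefully when PyTorch is not installed):

```python
# Minimal environment sanity check for a PyTorch + CUDA + cuDNN setup.
# Version mismatches among these three are a frequent cause of the
# "why is this environment broken" sessions described above.

def describe_torch_env():
    """Return a dict of version info, or a note if PyTorch is absent."""
    try:
        import torch
    except ImportError:
        return {"torch": None, "note": "PyTorch is not installed"}
    return {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        # CUDA toolkit version this PyTorch build was compiled against
        # (None for CPU-only builds):
        "cuda": torch.version.cuda,
        "cudnn": (torch.backends.cudnn.version()
                  if torch.backends.cudnn.is_available() else None),
    }

if __name__ == "__main__":
    for key, value in describe_torch_env().items():
        print(f"{key}: {value}")
```

Comparing this output against the compatibility matrix of the GPU driver actually installed on the machine usually narrows the problem down quickly.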
Granted, again, I do agree in theory with your point, and I don't think my argument relies on replication cost. But I can't see a future where replication costs are not a huge issue, in the same way I can't see a future where everyone agrees on {X}. It's not theoretically impossible, far from it, but technical over-complexity and competing near-equivalent standards are an issue with social roots that humans can't fix.