This seems to be an example of the “rounding to zero” fallacy. Requiring fewer workers does not mean no workers. Which, in turn, means that the problems you’re concerned about have ALREADY happened with 10:1 or 100:1 efficiency improvements, and going to 1000:1 is just a continuation.
The other thing that’s missing from your model is scarcity of capital. The story of the last ~70 years is that “human capital” or “knowledge capital” is a thing, and it’s not very easily owned exclusively—workers do in fact own (parts of) the means of production, in that the skills and knowledge are difficult to acquire but not really transferable or accumulable in the same way as land or equipment.
I mean, isn’t AI precisely about accumulating knowledge capital back into the form of ownable, material stuff? It’s in fact the main real complaint artists have about DALL-E, Midjourney, etc.: the models inferred their knowledge from their work and made it infinitely reproducible. GPTs do something similar with coding.
I understand your point about this being possibly just what you get if you make the scenario a little bit more extreme than it is, but I think this depends on what AGI is like. If AGI is human-ish in skills but still dumb or unreliable enough that it needs supervision, then sure, you’re right. But if it actually lives up to the promise of being precisely as good as human workers, then what’s left for humans to contribute? AGI that can’t accumulate know-how isn’t nearly good enough. AGI that can will do so in a matter of days.
isn’t AI all precisely about accumulating knowledge capital back into the form of ownable, material stuff?
Umm, no? Current AI seems much more about creating, distilling, and distributing knowledge capital in ways that are NOT exclusive or limited to the AI or its controllers.
“Creating” is a big word. Right now I wouldn’t say AI straight up creates knowledge. It creates new content, but the knowledge (e.g. the styles and techniques used in art) all comes from its training data set. Essentially, the human capital you mention (the skill that allows an artist, or a programmer, to be paid for their work) is what the AI captures from examples of that work and is then able to apply in different contexts.

If your argument is “but knowledge workers have their own knowledge capital!”, then AI is absolutely set to destroy that. In some cases this might have overall positive effects on society (e.g. AI medicine would likely save more lives by being cheaper and more available), but it’s hard to argue this isn’t about making those non-transferable skills, in fact, transferable (by packaging them inside a tool that anyone can use).
And the AIs mostly are exclusive to their controllers, aren’t they? Just because OpenAI puts its model on the internet for us to use doesn’t mean the model is now ours. They have the source, they have the weights. It’s theirs. Same goes for many others (not LLaMA, but no thanks to Meta). I see dangers in open-sourced AI too, but that’s a different can of worms. The point is that closed source absolutely means the owner retains control. If they offer it for free, you grow dependent on it, and then one day they decide to paywall it, you’ll have to pay. Because it’s theirs, and always was.
The knowledge isn’t the capital; the ability to actually accomplish something with the knowledge is where the value lies. ChatGPT and other LLMs make existing knowledge far more accessible and digestible for many, allowing humans to apply that knowledge for their own use.
The model is proprietary and owned (though perhaps its existence makes others cheaper to create), but the output, which is its primary value, is available very cheaply.
The output is just that: output. The model is what lets you create endless amounts of it. The knowledge needed to operate an art generator has nothing to do with art and is so basic and widespread that a child can do it: just tell it what you want it to draw. There may be a few quirks to prompting, but it’s not even remotely comparable to the complexity of actually making art yourself. No matter how you look at it, the model is the “means of production” here. The prompter does roughly what a commissioner would, so the model entirely replaces the technical expertise of the artist.