isn’t AI all precisely about accumulating knowledge capital back into the form of ownable, material stuff?
Umm, no? Current AI seems much more about creating, distilling, and distributing knowledge capital in ways that are NOT exclusive or limited to the AI or its controllers.
“Creating” is a big word. Right now I wouldn’t say AI straight up creates knowledge. It creates new content, but the knowledge (e.g. the styles and techniques used in art) all comes from its training data set. Essentially, the human capital you mention, which allows an artist to be paid for their work, or a programmer for their skill, is what the AI captures from examples of that work and is then able to apply in different contexts. If your argument is “but knowledge workers have their own knowledge capital!”, then AI is absolutely set to destroy that. In some cases this might have overall positive effects on society (e.g. AI medicine would likely save more lives by being cheaper and more available), but it’s pointless to argue this isn’t about making those non-transferable skills, in fact, transferable (by packaging them inside a tool that anyone can use).
And the AIs definitely are mostly exclusive to their controllers? Just because OpenAI puts its model on the internet for us to use doesn’t mean the model is now ours. They have the source, they have the weights. It’s theirs. Same goes for many others (not LLaMa, but no thanks to Meta). And I see the dangers in open-source AI too, but that’s a different can of worms. Closed source absolutely means the owner retains control. If they offer it for free, you grow dependent on it, and one day they decide to put it behind a paywall, you’ll have to pay. Because it’s theirs, and always was.
The knowledge isn’t the capital. The ability to actually accomplish something with the knowledge is the actual value. ChatGPT and other LLMs make existing knowledge far more accessible and digestible for many, allowing humans to apply that knowledge for their own use.
The model is proprietary and owned (though perhaps its existence makes others cheaper to create), but the output, which is its primary value, is available very cheaply.
The output is just that. The model allows you to create endless output. The knowledge needed to operate an art generator has nothing to do with art and is so basic and widespread a child can do it: just tell it what you want it to draw. There may be a few quirks to prompting, but it’s not even remotely comparable to the complexity of actually making art yourself. No matter how you look at it, the model is the “means of production” here. The prompter does roughly what a commissioner would, so the model entirely replaces the technical expertise of the artist.