I’m confused: what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn’t with AIs generating ‘art’; it’s that some artists have historically been able to make a living by creating commercial art, and AIs being capable of generating commercial art threatens the livelihood of those human artists.
There is nothing keeping you from continuing to select human-generated art, or from creating it yourself, even as AI-generated art might be chosen by others.
Just as you should be free to be biased towards human art, I think others should be free to be unbiased, or even biased towards AI-generated works.
I’m not talking about art per se though, I’m talking about things like the legal issues surrounding the training of models using copyrighted art. If copyright is meant to foster human creativity, it’s perfectly reasonable to say that the allowance to enjoy and remix works only applies to humans, not privately-owned AIs that can automate and parallelize the process to superhuman scale. If I own an AI trained on a trillion copyrighted images I effectively own data that has sort-of-a-copy of those images inside.
I don’t think AI art generation is necessarily bad overall, though I do think we should be more wary of it for various reasons. Mostly this: short of straight-up AGI, the limits of art generators mean we risk replacing the entire lower tier of human artists with a legion of poor imitations unable to renew their style or progress, leading to a situation where no one can support themselves doing art and thus train long enough to reach the higher tiers of mastery. Your “everyone does as they prefer” reasoning isn’t perfect, because in practice these seismic changes in the market would affect others too. But beyond that, my point is more general: regardless of your take on the art itself, the generators shouldn’t be treated as human artists (for example, neither DALL-E nor OpenAI should hold a copyright over the generated images).
Do I understand it correctly that if the AI outcompetes mediocre artists, there will be no more great artists, because each great artist was a mediocre artist first?
By the same logic, does the fact that you can buy mediocre food in any supermarket mean that there are no great chefs anymore? (Because no one would hire a person who produces worse food than the supermarkets, so beginners have nowhere to gain experience.)
Stack Exchange + Google can replace a poor software developer, so we will not have great software developers?
I think it depends on the thoroughness of the replacement. Cooking is still a useful life skill; the economics of it are such that you can in fact cook for yourself. But while someone probably still practices calligraphy and miniature painting for the heck of it, how many great miniaturists have there been since the printing press drove ’em out of a job? Do you know anyone who could copy an entire manuscript in pretty print?
Obviously this isn’t necessarily a tragedy; some skills just stop being useful, and we move on. But “art” is a much broader category than a single specific skill. And you will notice that since photography was born, for example, the figurative arts have taken a significant hit—replaced by other forms. The question is whether you can keep finding replacements, or whether at some point the well dries up and the quality of human art takes a dive because all that’s left for humans to do alone is simply not that interesting.
Stack Exchange + Google can replace a poor software developer, so we will not have great software developers?
Those things alone can’t. GPT-4 or future LLMs might, and yes, I’d say that would be a problem! People are already seeing how younger generations, who have grown up using more polished and user-friendly UIs, have a hard time grasping how a file system works, because those mechanisms are hidden from them. Spend long enough in “you tell the computer what to do and it does it for you” mode, and almost no one will seek out the skill of writing programs themselves. Which is all fine and dandy as long as the LLM works, but it makes double-checking its code, when it’s really critical, a lot harder.