My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare of both the human artists as well as the rest of us, in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.
I strongly expect, and would be outright shocked if it were otherwise, that we will end up with outright superhuman creativity and vision in artwork from AI alongside everything else they become superhuman at. It came as a great surprise to many that we’ve made such a great dent in visual art already with image models that lack the intelligence of an average human.
Thus, it doesn’t matter in the least if it stifles human output, because the overwhelming majority of us who don’t rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.
To put money where my mouth is: I write a web serial. After years of world-building and abortive sketches in my notes, I realized that the release of GPT-4 meant that any benefit from my significantly above-average ability as a human writer was in jeopardy, if not now, then a handful of advances down the line. So my own work is more of a “I told you I was a good writer, before anyone can plausibly claim my work was penned by an AI” for street cred rather than a replacement for my day job.
If GPT-5 can write as well as I can, and emulate my favorite authors, or even better yet, pen novel novels (pun intended), then my minor distress at losing potential Patreon money is more than ameliorated by the fact I have a nigh-infinite number of good books to read! I spend a great deal more time reading the works of others than writing myself.
The same is true for my day job as a doctor: I would look forward to being made obsolete, if only I had sufficient savings or a government I could comfortably rely on to institute UBI.
I would much prefer that we tax the fruits of automation to support us all when we’re inevitably obsolete rather than extend copyright law indefinitely into the future, or subject derivative works made by AI to the same constraints. The solution is to prepare our economies to support a ~100% non-productive human populace indefinitely; better to prepare now than when we have no choice but to do so or let them starve to death.
I’m also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work- all of which could be automated in the near future by generative AI. And I also think that’s a good thing.
Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art- consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect to copyright became so common that artists lost their fear. That fear still persists in almost every other medium, however, forcing artists to constantly reinvent the wheel rather than iterating on success. Copyright also creates a really enormous amount of artificial scarcity- limiting people’s access to art to a level far below what we have the technical capacity to provide. All because nobody can figure out a better way of funding artists than granting lots of little monopolies.
Once our work is automated and all but free, however, we’ll have the option of abolishing copyright altogether. That would free artists to create whatever we’d like; free self-expression from technical barriers; free artistic culture from the distorting and wasteful influence of zero-sum status competition. Art, I suspect, will get much, much better- and as someone who loves art, that means a lot to me.
And as terrible as this could be for my career, spending my life working in a job that could be automated but isn’t would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work. An offer of UBI, but only if I spend eight hours a day performing a ritual imitation of meaningful effort. No. If society wants to pay me for the loss of my profession, I won’t refuse, but if I have to go into construction or whatever to pay the bills while I wait to find out whether this is all going to lead to post-scarcity utopia or apocalypse, then so be it.
Could you explain your attitudes towards art and art culture more in depth and explain how exactly your opinions on AI art follow from those attitudes? For example, how much do you enjoy making art and how conditional is that enjoyment? How much do you care about self-expression, in what way? I’m asking because this analogy jumped out at me as a little suspicious:
And as terrible as this could be for my career, spending my life working in a job that could be automated but isn’t would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work.
But creative work is not mechanical work; it can’t be automated that way, and AI doesn’t replace you that way. AI doesn’t have a model of your brain; it can’t make the choices you would make. It replaces you by making something cheaper and on the same level of “quality”. It doesn’t automate your self-expression. If you care about self-expression, the possibility of AI doesn’t have to feel soul-crushing.
I apologize for sounding confrontational. You’re free to disagree with everything above. I just wanted to show that the question has a lot of potential nuances.
In that paragraph, I’m only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations. That kind of art isn’t about self-expression- it’s about communicating the client’s vision. Which is, admittedly, often a euphemism for “helping businesses win status signaling competitions”, but not always or entirely. Creating beautiful things and improving users’ experience is positive-sum, and something I take pride in.
Pretty soon, however, clients will be able to have the same sort of interactions with an AI that they have with me, and get better results. That means more of the positive-sum aspects of the work, with much less expenditure of resources- a very clear positive for society. If that’s prevented to preserve jobs like mine, then the jobs become a drain on society- no longer genuinely productive, and not something I could in good faith take pride in.
Artistic expression, of course, is something very different. I’m definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That’s not threatened at all by AI. In fact, I’ve really enjoyed mixing AI with traditional digital illustration recently. While I may go back to purely hand-drawn art for the challenge, AI in that context isn’t harming self-expression; it’s supporting it.
While it’s true that AI may threaten certain jobs that involve artistic self-expression (and probably my Patreon), I don’t think that’s actually going to result in less self-expression. As AI tools break down the technical barriers between imagination and final art piece, I think we’re going to see a lot more people expressing themselves through visual mediums.
Also, once AGI reaches and passes a human level, I’d be surprised if it wasn’t capable of some pretty profound and moving artistic self-expression in its own right. If it turns out that people are often more interested in what minds like that have to say artistically than what other humans are creating, then so long as those AIs are reasonably well-aligned, I’m basically fine with that. Art has never really been about zero-sum competition.
Thank you for the answer; it clarifies your opinion a lot!
Artistic expression, of course, is something very different. I’m definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That’s not threatened at all by AI.
I think there are some threats, at least hypothetical. For example, the “spam attack”. People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets “pruned” in a matter of days, long before the painter has said at least 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the “spam attack” is not really possible even if you have zero copyright laws.)
In general, I believe that for culture to exist we need to respect the idea “there’s a certain kind of output I can get only from a certain person, even if it means waiting or not having every single one of my desires fulfilled” in some way. For example, maybe you shouldn’t use AI to “steal” the face of an actor and make them play whatever you want.
Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?
Certainly communication needs to be restricted when it’s being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don’t see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.
Copying an actor’s likeness without their consent can cause a lot of harm when it’s used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is needed to promote the creation of art. Even in a post-scarcity future, it could be argued that realistically copying an actor’s likeness risks confusing the public when those copies are shared without context, and is therefore harmful- though I’m less sure about that one.
There are cases where imitating an actor without their consent, even very realistically, can be clearly harmless, however. For example, obvious parody and accurate reconstructions of damaged media. I don’t think those violate any fundamental moral right of actors to prevent imitations. In the absence of real harm, I think the right of the public to communicate what they want to communicate should outweigh the desire of an actor to control how they’re portrayed.
In your example of a “spam attack”, it seems to me one of two things would have to be true:
It could be that people lose interest in the original artist’s work because the imitations have already explored the limits of the idea in a way they find valuable- in which case, I think this is basically equivalent to when an idea goes viral in the culture; the original artist deserves respect for having invented the idea, but shouldn’t have a right to prevent the culture from exploring it, even if that exploration is very fast.
Alternatively, it could be the case that the artist has more to say that isn’t or can’t be expressed by the imitations- other ideas, interesting self-expression, and so on- but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do. In practice, I think this usually works fine- there’s already an enormous amount of imitation in the culture, but people who consistently create innovative work don’t often languish in obscurity.
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
I think having the possibility of competing with superhuman machines for the limited hearing time of humans can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, “right to be heard” implied “over another human”, and that is a very different competition.
I mean, I agree, but I think that’s a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value- that’s also good! More value is valuable.
And, of course, if the ASI isn’t well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries- and potentially rendered moot by the sudden conversion of those artists to computronium.
but I think that’s a question of alignment rather than a problem inherent to AI media
Disagree. Imagine you produced perfectly aligned ASI—it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let’s allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good, anything human pales by comparison, so people overwhelmingly turn to it (there might be a small subculture of hard-core enjoyers of human art, but probably not a super relevant one). The ASI feeds everyone novels, movies, essays and what have you custom-built for their enjoyment. The ASI is also kind and aware enough to not make its content straight up addictive, and instead nicely push people away from excessively codependent behaviour. It’s all good.
Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god’s world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that “alignment” implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities—and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.
I don’t think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions—the bad art, the disagreements, the rejection in love—are an essential part inseparable from the existence of bonds tying us together as a species. Trying to sever only the bad parts results in severing the whole lot of it, and results in us remitting our agency to whatever is babying us. So, yeah, IMO humans have a right to be heard over machines, or rather, we should preserve that right if we care about staying in control of our own civilisation. Otherwise, we lose it not to exterminators but to caretakers. A softer twilight, but still a twilight.
You are conflating two definitions of alignment, “notkilleveryoneism” and “ambitious CEV-style value alignment”. If you have only the first type of alignment, you don’t use it to produce good art; you use it for something like “augment human intelligence so we can solve the second type of alignment”. If your ASI is aligned in the second sense, it is going to deduce that humans wouldn’t like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there for us and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some kind of other disempowerment and something likely more akin to “Wireheading by Infinite Jest”. Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it’s far from guaranteed to solve the problem (if it’s solvable at all). You can’t realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can’t expect that after creating ASI somewhere there is where we Just Stop. Either we stop before, or we go all the way.
“Under current economic incentives and structure” we can have only “no alignment”. I was talking about rosy hypotheticals.
My point was “either we are dead or we are sane enough to stop, find another way, and solve the problem fully”. Your scenario is not inside the set of realistic outcomes.
If we want to go by realistic outcomes, we’re either lucky in that somehow AGI isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a take-over, or simply we get a new unexpected AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise then I’m not sure what’s more unlikely between the fully aligned ASI or the only not-kill-everyone aligned one that however we still manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which even putting aside economic incentives would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and ok in principle but repugnant in practice due to the ethics of the required experiments to most of the rest).
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. Similar thing with personality. For any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual.
Uncontrollable content generation can, in principle, undermine the possibility of consciousness being “visible” and undermine the possibility of any kind of personality/individuality. And without those things we can’t have any culture or society except a hivemind.
Are you OK with such disintegration of culture and society?
In general, I think people have a right to hear other people, but not a right to be heard.
To me that’s very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
Alternatively, it could be the case that the artist has more to say that isn’t or can’t be expressed by the imitations- other ideas, interesting self expression, and so on- but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of an good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do.
I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
To me that’s very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
I wouldn’t take the principle to an absolute- there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that freedom.
Speaking only for myself, I’m not actually that invested in leaving an individual mark on society- when I put effort into something I value, whether people recognize that I’ve done so is not often something I worry about, and the way people perceive me doesn’t usually have much to do with how I define myself. Most of the art I’ve created in my life I’ve never actually shared with anyone- not out of shame, but just because I’ve never gotten around to it.
I realize I’m pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
I tried to describe necessary conditions which are needed for society and culture to exist. Do you agree that what I’ve described are necessary conditions?
I realize I’m pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
Relevant part of my argument was “if your personality gets limitlessly copied and modified, your personality doesn’t exist (in the cultural sense)”. You’re talking about something different: ambitions and the desire for fame.
My thesis (to not lose the thread of the conversation):
If human culture and society are natural, then the rights about information are natural too, because culture/society can’t exist without them.
Yeah, I do get that—if the possibility exists and it’s just curtailed (e.g. you have some kind of protectionist law that says book covers or movie posters must be illustrated by humans even though AI can do it just as well), it feels like a bad joke anyway. The genie’s out of the bottle, personally I think to some extent it’s bad that we let it out at all, but we can’t put it back in anyway and it’s not even particularly realistic to imagine a world in which we dodged this specific application (after all it’s a pretty natural generalization of computer vision).
The copyright issue is separate—having copyright BUT letting corporations violate it to train AIs that then are used to generate images that can in turn be copyrighted would absolutely be the worst of both worlds. That said, even without copyright you still have an asymmetry because big companies have more resources for compute. We’re not going to see a post-scarcity utopia for sure if we don’t find a way to buck this centralization trend, and art is just one example of it.
However, about the fact that the “work of making art” can be easily automated, I think casting it as work at all is already missing the point. It’s made into economic useful work because it’s something that can be monetized, but at its core, art is a form of communication. Let’s put it this way—suppose you can make AIs (and robots) that make for better-than-human lovers. I mean in all respects, from sex to just being comforting and supporting when necessary. They don’t feel anything, they’re just very good at predicting and simulating the actions of an ideal partner. Would you say that is “automating away the work of being a good partner”, which thus should be automated away, since it would be pointless to try and do it worse than a machine would? Or does “the work” itself lose meaning once you know it’s just that, just work, and there is no intent behind it?
The thing you say, about art being freed from the constraints of commercialism, would be a consequence of having post-scarcity, not of having AI art generators. If you have AI-generated art but you still struggle to make ends meet you won’t be able to freely create art, you’ll just be busy doing some other much shittier job and then come home and enjoy your custom AI Netflix show to try and feel something for a couple hours. There is no fundamental right of people to have as much art as possible and as close to their tastes as they want, any more than there is to have the perfect lover that meets their needs to a T. To turn those things into products that we’re entitled to leads pretty much to losing our own humanity. It’s perfectly fine to say we should all have our material needs satisfied—food, housing, clothing—but when it comes to relationships with others (be it friendship, love, or the much less personal but still human rapport between an artist and an admirer of their art), I think we can’t stop doing the work ourselves without losing something crucial to our nature, and ultimately, losing our identity as a species.
Thus, it doesn’t matter in the least if it stifles human output, because the overwhelming majority of us who don’t rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.
How do you know that? Art is one of the biggest outlets of human potential; one of the biggest forces behind human culture and human communities; one of the biggest communication channels between people.
One doesn’t need to be a professional artist to care about all that.
Well, “to make a living” implies that you’re an artist as a profession and earn money from it. But I agree with you that that’s far from the only problem. Art is a two-way street and its economic value isn’t all there is to it. A world in which creating art feels pointless is one in which IMO we’re all significantly more miserable.
My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare of both the human artists as well as the rest of us, in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.
I strongly expect, and would be outright shocked if it were otherwise, that we won’t end up with outright superhuman creativity and vision in artwork from AI alongside everything else they become superhuman at. It came as a great surprise to many that we’ve made such a great dent in visual art already with image models that lack the intelligence of an average human.
Thus, it doesn’t matter in the least if it stifles human output, because the overwhelming majority of us who don’t rely on our artistic talent to make a living will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.
To put money where my mouth is, I write a web serial, after years of world-building and abortive sketches in my notes, I realized that the release of GPT-4 meant that any benefit from my significantly above average ability to be a human writer was in jeopardy, if not now, then a handful of advances down the line. So my own work is more of a “I told you I was a good writer, before anyone can plausibly claim my work was penned by an AI” for street cred rather than a replacement for my day job.
If GPT-5 can write as well as I can, and emulate my favorite authors, or even better yet, pen novel novels (pun intended), then my minor distress at losing potential Patreon money is more than ameliorated by the fact I have a nigh-infinite number of good books to read! I spend a great deal more time reading the works of others than writing myself.
The same is true for my day job, being a doctor, I would look forward to being made obsolete, if only I had sufficient savings or a government I could comfortably rely on to institute UBI.
I would much prefer that we tax the fruits of automation to support us all when we’re inevitably obsolete rather than extend copyright law indefinitely into the future, or subject derivative works made by AI to the same constraints. The solution is to prepare our economies to support a ~100% non-productive human populace indefinitely, better preparing now than when we have no choice but to do so or let them starve to death.
I’m also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work- all of which could be automated in the near future by generative AI. And I also think that’s a good thing.
Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art- consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect to copyright became so common that artists lost their fear. That fear still persists in almost every other medium, however, forcing artists to constantly reinvent the wheel rather than iterating on success. Copyright also creates a really enormous amount of artificial scarcity- limiting peoples’ access to art to a level far below what we have the technical capacity to provide. All because nobody can figure out a better way of funding artists than granting lots of little monopolies.
Once our work is automated and all but free, however, we’ll have the option of abolishing copyright altogether. That would free artists to create whatever we’d like; free self-expression from technical barriers; free artistic culture from the distorting and wasteful influence of zero-sum status competition. Art, I suspect, will get much, much better- and as someone who loves art, that means a lot to me.
And as terrible as this could be for my career, spending my life working in a job that could be automated but isn’t would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work. An offer of UBI, but only if I spend eight hours a day performing a ritual imitation of meaningful effort. No. If society wants to pay me for the loss of my profession, I won’t refuse, but if I have to go into construction or whatever to pay the bills while I wait to find out whether this is all going to lead to post-scarcity utopia or apocalypse, then so be it.
Could you explain your attitudes towards art and art culture more in depth and explain how exactly your opinions on AI art follow from those attitudes? For example, how much do you enjoy making art and how conditional is that enjoyment? How much do you care about self-expression, in what way? I’m asking because this analogy jumped out at me as a little suspicious:
But creative work is not mechanical work, it can’t be automated that way, AI doesn’t replace you that way. AI doesn’t have the model of your brain, it can’t make the choices you would make. It replaces you by making something cheaper and on the same level of “quality”. It doesn’t automate your self-expression. If you care about self-expression, the possibility of AI doesn’t have to feel soul-crushing.
I apologize for sounding confrontational. You’re free to disagree with everything above. I just wanted to show that the question has a lot of potential nuances.
In that paragraph, I’m only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations. That kind of art isn’t about self-expression- it’s about communicating the client’s vision. Which is, admittedly, often a euphemism for “helping businesses win status signaling competitions”, but not always or entirely. Creating beautiful things and improving users’ experience is positive-sum, and something I take pride in.
Pretty soon, however, clients will be able to have the same sort of interactions with an AI that they have with me, and get better results. That means more of the positive-sum aspects of the work, with much less expenditure of resources- a very clear positive for society. If that’s prevented to preserve jobs like mine, then the jobs become a drain on society- no longer genuinely productive, and not something I could in good faith take pride in.
Artistic expression, of course, is something very different. I’m definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That’s not threatened at all by AI. In fact, I’ve really enjoyed mixing AI with traditional digital illustration recently. While I may go back to purely hand-drawn art for the challenge, AI in that context isn’t harming self-expression; it’s supporting it.
While it’s true that AI may threaten certain jobs that involve artistic self-expression (and probably my Patreon), I don’t think that’s actually going to result in less self-expression. As AI tools break down the technical barriers between imagination and final art piece, I think we’re going to see a lot more people expressing themselves through visual mediums.
Also, once AGI reaches and passes a human level, I’d be surprised if it wasn’t capable of some pretty profound and moving artistic self-expression in its own right. If it turns out that people are often more interested in what minds like that have to say artistically than in what other humans are creating, then so long as those AIs are reasonably well-aligned, I’m basically fine with that. Art has never really been about zero-sum competition.
Thank you for the answer, it clarifies your opinion a lot!
I think there are some threats, at least hypothetical. For example, the “spam attack”. People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets “pruned” in a matter of days, long before the painter has said at least 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the “spam attack” is not really possible even if you have zero copyright laws.)
In general, I believe that for culture to exist we need to respect the idea “there’s a certain kind of output I can get only from a certain person, even if it means waiting or not having every single one of my desires fulfilled” in some way. For example, maybe you shouldn’t use AI to “steal” an actor’s face and make them play whatever you want.
Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?
Certainly communication needs to be restricted when it’s being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don’t see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.
Copying an actor’s likeness without their consent can cause a lot of harm when it’s used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is needed to promote the creation of art. Even in a post-scarcity future, it could be argued that realistically copying an actor’s likeness risks confusing the public when those copies are shared without context, and is therefore harmful- though I’m less sure about that one.
There are cases where imitating an actor without their consent, even very realistically, can be clearly harmless, however. For example, obvious parody and accurate reconstructions of damaged media. I don’t think those violate any fundamental moral right of actors to prevent imitations. In the absence of real harm, I think the right of the public to communicate what they want to communicate should outweigh the desire of an actor to control how they’re portrayed.
In your example of a “spam attack”, it seems to me one of two things would have to be true:
It could be that people lose interest in the original artist’s work because the imitations have already explored the limits of the idea in a way they find valuable- in which case, I think this is basically equivalent to when an idea goes viral in the culture; the original artist deserves respect for having invented the idea, but shouldn’t have a right to prevent the culture from exploring it, even if that exploration is very fast.
Alternatively, it could be the case that the artist has more to say that isn’t or can’t be expressed by the imitations- other ideas, interesting self-expression, and so on- but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm, or a friend group who recommend content to each other, should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do. In practice, I think this usually works fine- there’s already an enormous amount of imitation in the culture, but people who consistently create innovative work don’t often languish in obscurity.
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
I think having the possibility of competing with superhuman machines for the limited hearing time of humans can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, “right to be heard” implied “over another human”, and that is a very different competition.
I mean, I agree, but I think that’s a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value- that’s also good! More value is valuable.
And, of course, if the ASI isn’t well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries- and potentially rendered moot by the sudden conversion of those artists to computronium.
Disagree. Imagine you produced perfectly aligned ASI—it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let’s allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good, anything human pales by comparison, so people overwhelmingly turn to it instead (there might be a small subculture of hard-core human-art enjoyers, but probably not a super relevant one). The ASI feeds everyone novels, movies, essays and what have you, custom-built for their enjoyment. The ASI is also kind and aware enough to not make its content straight up addictive, and instead nicely pushes people away from excessively codependent behaviour. It’s all good.
Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god’s world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that “alignment” implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities—and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.
I don’t think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions—the bad art, the disagreements, the rejection in love—are inseparable from the bonds tying us together as a species. Trying to sever only the bad parts severs the whole lot of it, and results in us remitting our agency to whatever is babying us. So, yeah, IMO humans have a right to be heard over machines, or rather, we should preserve that right if we care about staying in control of our own civilisation. Otherwise, we lose it not to exterminators but to caretakers. A softer twilight, but still a twilight.
You are conflating two definitions of alignment, “notkilleveryoneism” and “ambitious CEV-style value alignment”. If you have only the first type of alignment, you don’t use it to produce good art; you use it for something like “augment human intelligence so we can solve the second type of alignment”. If your ASI is aligned in the second sense, it is going to deduce that humans wouldn’t like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some other kind of disempowerment, likely something more akin to “Wireheading by Infinite Jest”. Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it’s far from guaranteed to solve the problem (if it’s solvable at all). You can’t realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can’t expect that, having created ASI, that’s where we Just Stop. Either we stop before, or we go all the way.
“Under current economic incentives and structure” we can have only “no alignment”. I was talking about rosy hypotheticals. My point was “either we are dead, or we are sane enough to stop, find another way, and solve the problem fully”. Your scenario is not inside the set of realistic outcomes.
If we want to go by realistic outcomes, we’re either lucky in that somehow AGI isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a takeover, or simply a new unexpected AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise, then I’m not sure what’s more unlikely between the fully aligned ASI and the merely not-kill-everyone aligned one that we somehow still manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which even putting aside economic incentives would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and ok in principle but repugnant in practice due to the ethics of the required experiments to most of the rest).
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. Similar thing with personality. For any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual.
Uncontrollable content generation can, in principle, undermine the possibility of consciousness being “visible” and undermine the possibility of any kind of personality/individuality. And without those things we can’t have any culture or society except a hivemind.
Are you OK with such disintegration of culture and society?
To me that’s very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
I wouldn’t take the principle to an absolute- there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that freedom.
Speaking only for myself, I’m not actually that invested in leaving an individual mark on society- when I put effort into something I value, whether people recognize that I’ve done so is not often something I worry about, and the way people perceive me doesn’t usually have much to do with how I define myself. Most of the art I’ve created in my life I’ve never actually shared with anyone- not out of shame, but just because I’ve never gotten around to it.
I realize I’m pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
I tried to describe necessary conditions which are needed for society and culture to exist. Do you agree that what I’ve described are necessary conditions?
Relevant part of my argument was “if your personality gets limitlessly copied and modified, your personality doesn’t exist (in the cultural sense)”. You’re talking about something different, you’re talking about ambitions and desire of fame.
My thesis (to not lose the thread of the conversation):
If human culture and society are natural, then the rights about information are natural too, because culture/society can’t exist without them.
Ominous supervillain voice: “For now.”
Yeah, I do get that—if the possibility exists and it’s just curtailed (e.g. you have some kind of protectionist law that says book covers or movie posters must be illustrated by humans even though AI can do it just as well), it feels like a bad joke anyway. The genie’s out of the bottle, personally I think to some extent it’s bad that we let it out at all, but we can’t put it back in anyway and it’s not even particularly realistic to imagine a world in which we dodged this specific application (after all it’s a pretty natural generalization of computer vision).
The copyright issue is separate—having copyright BUT letting corporations violate it to train AIs that then are used to generate images that can in turn be copyrighted would absolutely be the worst of both worlds. That said, even without copyright you still have an asymmetry, because big companies have more resources for compute. We’re not going to see a post-scarcity utopia for sure if we don’t find a way to buck this centralization trend, and art is just one example of it.
However, about the fact that the “work of making art” can be easily automated, I think casting it as work at all is already missing the point. It’s made into economic useful work because it’s something that can be monetized, but at its core, art is a form of communication. Let’s put it this way—suppose you can make AIs (and robots) that make for better-than-human lovers. I mean in all respects, from sex to just being comforting and supporting when necessary. They don’t feel anything, they’re just very good at predicting and simulating the actions of an ideal partner. Would you say that is “automating away the work of being a good partner”, which thus should be automated away, since it would be pointless to try and do it worse than a machine would? Or does “the work” itself lose meaning once you know it’s just that, just work, and there is no intent behind it?
The thing you say, about art being freed from the constraints of commercialism, would be a consequence of having post-scarcity, not of having AI art generators. If you have AI-generated art but you still struggle to make ends meet you won’t be able to freely create art, you’ll just be busy doing some other much shittier job and then come home and enjoy your custom AI Netflix show to try and feel something for a couple hours. There is no fundamental right of people to have as much art as possible and as close to their tastes as they want, any more than there is to have the perfect lover that meets their needs to a T. To turn those things into products that we’re entitled to leads pretty much to losing our own humanity. It’s perfectly fine to say we should all have our material needs satisfied—food, housing, clothing—but when it comes to relationships with others (be it friendship, love, or the much less personal but still human rapport between an artist and an admirer of their art), I think we can’t stop doing the work ourselves without losing something crucial to our nature, and ultimately, losing our identity as a species.
How do you know that? Art is one of the biggest outlets of human potential; one of the biggest forces behind human culture and human communities; one of the biggest communication channels between people.
One doesn’t need to be a professional artist to care about all that.
Well, “to make a living” implies that you’re an artist as a profession and earn money from it. But I agree with you that that’s far from the only problem. Art is a two-way street and its economic value isn’t all there is to it. A world in which creating art feels pointless is one in which IMO we’re all significantly more miserable.