In that paragraph, I’m only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations. That kind of art isn’t about self-expression- it’s about communicating the client’s vision. Which is, admittedly, often a euphemism for “helping businesses win status signaling competitions”, but not always or entirely. Creating beautiful things and improving users’ experience is positive-sum, and something I take pride in.
Pretty soon, however, clients will be able to have the same sort of interactions with an AI that they have with me, and get better results. That means more of the positive-sum aspects of the work, with much less expenditure of resources- a very clear positive for society. If that’s prevented to preserve jobs like mine, then the jobs become a drain on society- no longer genuinely productive, and not something I could in good faith take pride in.
Artistic expression, of course, is something very different. I’m definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That’s not threatened at all by AI. In fact, I’ve really enjoyed mixing AI with traditional digital illustration recently. While I may go back to purely hand-drawn art for the challenge, AI in that context isn’t harming self-expression; it’s supporting it.
While it’s true that AI may threaten certain jobs that involve artistic self-expression (and probably my Patreon), I don’t think that’s actually going to result in less self-expression. As AI tools break down the technical barriers between imagination and the finished piece, I think we’re going to see a lot more people expressing themselves through visual mediums.
Also, once AGI reaches and passes a human level, I’d be surprised if it wasn’t capable of some pretty profound and moving artistic self-expression in its own right. If it turns out that people are often more interested in what minds like that have to say artistically than in what other humans are creating, then so long as those AIs are reasonably well-aligned, I’m basically fine with that. Art has never really been about zero-sum competition.
Thank you for the answer, it clarifies your opinion a lot!
Artistic expression, of course, is something very different. I’m definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That’s not threatened at all by AI.
I think there are some threats, at least hypothetical. For example, the “spam attack”. People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets “pruned” in a matter of days, long before the painter has said even 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the “spam attack” is not really possible, even with zero copyright laws.)
In general, I believe for culture to exist we need to respect the idea “there’s a certain kind of output I can get only from a certain person, even if it means waiting or not having every single one of my desires fulfilled” in some way. For example, maybe you shouldn’t use AI to “steal” an actor’s face and make them play whatever you want.
Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?
Certainly communication needs to be restricted when it’s being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don’t see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.
Copying an actor’s likeness without their consent can cause a lot of harm when it’s used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is needed to promote the creation of art. Even in a post-scarcity future, it could be argued that realistically copying an actor’s likeness risks confusing the public when those copies are shared without context, and is therefore harmful- though I’m less sure about that one.
There are cases where imitating an actor without their consent, even very realistically, can be clearly harmless, however. For example, obvious parody and accurate reconstructions of damaged media. I don’t think those violate any fundamental moral right of actors to prevent imitations. In the absence of real harm, I think the right of the public to communicate what they want to communicate should outweigh the desire of an actor to control how they’re portrayed.
In your example of a “spam attack”, it seems to me one of two things would have to be true:
It could be that people lose interest in the original artist’s work because the imitations have already explored the limits of the idea in a way they find valuable- in which case, I think this is basically equivalent to when an idea goes viral in the culture; the original artist deserves respect for having invented the idea, but shouldn’t have a right to prevent the culture from exploring it, even if that exploration is very fast.
Alternatively, it could be the case that the artist has more to say that isn’t or can’t be expressed by the imitations- other ideas, interesting self-expression, and so on- but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do. In practice, I think this usually works fine- there’s already an enormous amount of imitation in the culture, but people who consistently create innovative work don’t often languish in obscurity.
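To make that concrete, here’s a toy sketch of the kind of heuristic I have in mind (entirely hypothetical- the names and numbers aren’t from any real platform): a ranker that remembers who originated a topic and gives that creator’s later work a lasting visibility bonus, so fast imitation can’t bury them.

```python
# Toy sketch, entirely hypothetical: a feed-ranking heuristic that
# remembers who first posted on a topic and boosts that creator's
# later work, so fast imitation doesn't bury the originator.
from collections import defaultdict

class OriginatorAwareRanker:
    def __init__(self, originality_bonus=2.0):
        self.originality_bonus = originality_bonus
        self.first_poster = {}                # topic -> creator who explored it first
        self.originations = defaultdict(int)  # creator -> number of topics originated

    def observe(self, creator, topic):
        """Record a post; credit the creator only if the topic is new."""
        if topic not in self.first_poster:
            self.first_poster[topic] = creator
            self.originations[creator] += 1

    def score(self, creator, base_engagement):
        """Rank work by engagement plus a lasting bonus for past originality."""
        return base_engagement + self.originality_bonus * self.originations[creator]

ranker = OriginatorAwareRanker()
ranker.observe("painter", "niche topic")    # the painter invents the idea
ranker.observe("imitator", "niche topic")   # imitations earn no origination credit
print(ranker.score("painter", 1.0))   # 3.0: the originator's next work stays visible
print(ranker.score("imitator", 1.0))  # 1.0
```

The point isn’t this particular formula- just that a filter, whether an algorithm or a friend group, can cheaply keep track of who invents things and keep listening to them.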
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
In general, I think people have a right to hear other people, but not a right to be heard. When protestors shout down a speech or spam bots make it harder to find information, the relevant right being violated is the former, not the latter.
I think the possibility of having to compete with superhuman machines for humans’ limited listening time can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, “right to be heard” implied “over another human”, and that is a very different competition.
I mean, I agree, but I think that’s a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value- that’s also good! More value is valuable.
And, of course, if the ASI isn’t well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries- and potentially rendered moot by the sudden conversion of those artists to computronium.
but I think that’s a question of alignment rather than a problem inherent to AI media
Disagree. Imagine you produced perfectly aligned ASI—it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let’s allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good that anything human pales by comparison, so people overwhelmingly turn to it (there might be a small subculture of hard-core enjoyers of human art, but probably not super relevant). The ASI feeds everyone novels, movies, essays and what have you, custom-built for their enjoyment. The ASI is also kind and aware enough not to make its content straight-up addictive, and instead gently pushes people away from excessively codependent behaviour. It’s all good.
Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god’s world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that “alignment” implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities—and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.
I don’t think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions—the bad art, the disagreements, the rejection in love—are inseparable from the bonds tying us together as a species. Trying to sever only the bad parts severs the whole lot, and results in us remitting our agency to whatever is babying us. So, yeah, IMO humans have a right to be heard over machines, or rather, we should preserve that right if we care about staying in control of our own civilisation. Otherwise, we lose it not to exterminators but to caretakers. A softer twilight, but still a twilight.
You are conflating two definitions of alignment, “notkilleveryoneism” and “ambitious CEV-style value alignment”. If you have only the first type of alignment, you don’t use it to produce good art, you use it for something like “augment human intelligence so we can solve the second type of alignment”. If your ASI is aligned in the second sense, it is going to deduce that humans wouldn’t like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there for us and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some other kind of disempowerment, likely something more akin to “Wireheading by Infinite Jest”. Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it’s far from guaranteed to solve the problem (if it’s solvable at all). You can’t realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can’t expect that some point after creating ASI is where we Just Stop. Either we stop before, or we go all the way.
“Under current economic incentives and structure” we can have only “no alignment”. I was talking about rosy hypotheticals.
My point was “either we are dead or we are sane enough to stop, find another way and solve the problem fully”. Your scenario is not inside the set of realistic outcomes.
If we want to go by realistic outcomes, we’re either lucky in that somehow AGI isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a take-over, or simply a new unexpected AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise, then I’m not sure what’s more unlikely between the fully aligned ASI and the merely not-kill-everyone-aligned one that we nonetheless manage to rein in and eventually align (never mind human intelligence enhancement, which, even putting aside economic incentives, would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and okay in principle but repugnant in practice, due to the ethics of the required experiments, to most of the rest).
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. Similar thing with personality. For any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual.
Uncontrollable content generation can, in principle, undermine the possibility for consciousness to be “visible” and undermine the possibility of any kind of personality/individuality. And without those things we can’t have any culture or society except a hivemind.
Are you OK with such disintegration of culture and society?
In general, I think people have a right to hear other people, but not a right to be heard.
To me that’s very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
Alternatively, it could be the case that the artist has more to say that isn’t or can’t be expressed by the imitations- other ideas, interesting self-expression, and so on- but the imitations prevent people from finding that new work. I think that case is a failure of whatever means people are using to filter and find art. A good social media algorithm or friend group who recommend content to each other should recognize that the inventor of a good idea might invent other good ideas in the future, and should keep an eye out for and platform those ideas if they do.
I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
To me that’s very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
I wouldn’t take the principle to an absolute- there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that freedom.
Speaking only for myself, I’m not actually that invested in leaving an individual mark on society- when I put effort into something I value, whether people recognize that I’ve done so is not often something I worry about, and the way people perceive me doesn’t usually have much to do with how I define myself. Most of the art I’ve created in my life I’ve never actually shared with anyone- not out of shame, but just because I’ve never gotten around to it.
I realize I’m pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
I tried to describe the conditions which are necessary for society and culture to exist. Do you agree that what I’ve described are necessary conditions?
I realize I’m pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
The relevant part of my argument was “if your personality gets limitlessly copied and modified, your personality doesn’t exist (in the cultural sense)”. You’re talking about something different: ambition and the desire for fame.
My thesis (to not lose the thread of the conversation):
If human culture and society are natural, then rights over information are natural too, because culture/society can’t exist without them.