I mean, I agree, but I think that’s a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation, and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value, that’s also good! More value is valuable.
And, of course, if the ASI isn’t well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries, and potentially rendered moot by the sudden conversion of those artists to computronium.
but I think that’s a question of alignment rather than a problem inherent to AI media
Disagree. Imagine you produced perfectly aligned ASI—it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let’s allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good that anything human pales by comparison, so people overwhelmingly prefer it (there might be a small subculture of hard-core enjoyers of human art, but probably not a super relevant one). The ASI feeds everyone novels, movies, essays and what have you, custom-built for their enjoyment. The ASI is also kind and aware enough not to make its content straight-up addictive, and instead nicely pushes people away from excessively codependent behaviour. It’s all good.
Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god’s world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that “alignment” implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities—and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.
I don’t think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions—the bad art, the disagreements, the rejection in love—are inseparable from the bonds tying us together as a species. Trying to sever only the bad parts severs the whole lot, and ends with us ceding our agency to whatever is babying us. So, yeah, IMO humans have a right to be heard over machines, or rather, we should preserve that right if we care about staying in control of our own civilisation. Otherwise, we lose it not to exterminators but to caretakers. A softer twilight, but still a twilight.
You are conflating two definitions of alignment, “notkilleveryoneism” and “ambitious CEV-style value alignment”. If you have only the first type of alignment, you don’t use it to produce good art; you use it for something like “augment human intelligence so we can solve the second type of alignment”. If your ASI is aligned in the second sense, it is going to deduce that humans wouldn’t like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there for us and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
If you have only the first type of alignment, then under current economic incentives and structures you almost 100% end up with some other kind of disempowerment, something likely more akin to “Wireheading by Infinite Jest”. Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it’s far from guaranteed to solve the problem (if the problem is solvable at all). You can’t realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can’t expect that the point right after creating ASI is somehow where we Just Stop. Either we stop before, or we go all the way.
“Under current economic incentives and structure” we can have only “no alignment”. I was talking about rosy hypotheticals.
My point was “either we are dead or we are sane enough to stop, find another way, and solve the problem fully”. Your scenario is not inside the set of realistic outcomes.
If we want to go by realistic outcomes, we’re either lucky in that AGI somehow isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a takeover, or simply a new, unexpected AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise, then I’m not sure which is more unlikely: the fully aligned ASI, or the one that is only not-kill-everyone aligned but that we somehow still manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which, even putting aside economic incentives, would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and OK in principle but repugnant in practice, due to the ethics of the required experiments, to most of the rest).