even existing GenAI can make good-enough content that would otherwise have required nontrivial amounts of human cognitive effort
This doesn’t seem to be true to me. Good enough for what? We’re still in the “wow, an AI made this” stage. We find that people don’t value AI art, and I don’t think that’s because of its unscarcity or whatever, I think it’s because it isn’t saying anything. It either needs to be very tightly controlled by an AI-using human artist, or the machine needs to understand the needs of the world and the audience, and as soon as machines have that...
Ending the world? Where does that come in?
All communications assume that the point they're making is important and worth reading in some way (the cooperative maxim of quantity). I'm contending that that assumption doesn't hold here, given what seems likely to actually happen immediately or shortly after the point starts to become applicable to the technology. I have explained why, but I can understand if it's still confusing, because:
The space of ‘anything we can imagine’ will shrink as our endogenous understanding of concepts shrinks. It will never not be ‘our problem’
is true, but that doesn’t mean we need to worry about this today. By the time we have to worry about preserving our understanding of the creative process against its automation, we’ll be on the verge of receiving post-linguistic knowledge-transfer technologies and everything else, quicker than the automation can wreak its atrophying effects. Eventually it’ll be a problem that we each have to tackle, but we’ll have a new kind of support; paradoxically, learning the solutions to the problem will not be our problem.
(Your response and arguments are good, so take the below in a friendly and non-dogmatic spirit)
Good enough for what?
Good enough for time-pressed people (and lazy and corrupt people, but they’re in a different category) to have a black-box system do things for them that, in the absence of that system, they might have invested effort to do themselves and, as an intended or unintended result, thereby increased their understanding, opening up new avenues of doing and understanding.
We’re still in the “wow, an AI made this” stage.
I’m pretty sure we’re currently exiting the “wow, an AI made this” stage, in the sense that ‘fakes’ in some domains are approaching undetectability.
We find that people don’t value AI art, and I don’t think that’s because of its unscarcity or whatever, I think it’s because it isn’t saying anything
I strongly agree, but it’s a slightly different point: that art is arguably not art if it was not made by an experiencing artist, whether flesh or artificial.
My worry about the evaporation of (human) understanding covers science (and every other iterably-improvable, abstractable domain) as well as art, and the impoverishment of creative search space that might result from abstraction-offloading will not be strongly affected by cultural attitudes toward proximately AI-created art — even less so when there’s no other kind left.
the machine needs to understand the needs of the world and the audience, and as soon as machines have that...
It’s probably not what you were implying, but I’d complete your sentence like so: “as soon as machines have that, we will have missed the slim chance we have now to protect human understanding from being fully or mostly offloaded.”
but where that tech is somehow not reliable and independent enough to be applied to ending the world...
I’m afraid I still can’t parse this, sorry! Are you referring to AI Doom, or the point at which A(G)I becomes omnipotent enough to be able to end the world, even if it doesn’t?
By the time we have to worry about preserving our understanding of the creative process against automation of it, we’ll be on the verge of receiving post-linguistic knowledge transfer technologies and everything else, quicker than the automation can wreak its atrophying effects.
I don’t have any more expertise or soothsaying power than you, so in the absence of priors to weight the options, I guess your opposing prediction is as likely as mine.
I’d just argue that the bad consequences of my prediction are bad enough to motivate us to try to stop it coming true, even if it isn’t guaranteed to come true.