(Your response and arguments are good, so take the below in a friendly and non-dogmatic spirit)
Good enough for what?
Good enough for time-pressed people (and lazy and corrupt people, but they’re in a different category) to have a black-box system do things for them that they might otherwise have invested effort to do themselves, effort that, as an intended or unintended result, would have increased their understanding and opened up new avenues of doing and understanding.
We’re still in the “wow, an AI made this” stage.
I’m pretty sure we’re currently exiting the “wow, an AI made this” stage, in the sense that ‘fakes’ in some domains are approaching undetectability.
We find that people don’t value AI art, and I don’t think that’s because of its unscarcity or whatever, I think it’s because it isn’t saying anything
I strongly agree, but it’s a slightly different point: that art is arguably not art if it was not made by an experiencing artist, whether flesh or artificial.
My worry about the evaporation of (human) understanding covers science (and every other iterably-improvable, abstractable domain) as well as art. The impoverishment of creative search space that might result from abstraction-offloading will not be strongly affected by cultural attitudes toward proximately AI-created art, and even less so when there’s no other kind left.
the machine needs to understand the needs of the world and the audience, and as soon as machines have that...
It’s probably not what you were implying, but I’d complete your sentence like so: “as soon as machines have that, we will have missed the slim chance we have now to protect human understanding from being fully or mostly offloaded.”
but where that tech is somehow not reliable and independent enough to be applied to ending the world...
I’m afraid I still can’t parse this, sorry! Are you referring to AI Doom, or to the point at which A(G)I becomes powerful enough to end the world, even if it doesn’t?
By the time we have to worry about preserving our understanding of the creative process against automation of it, we’ll be on the verge of receiving post-linguistic knowledge transfer technologies and everything else, quicker than the automation can wreak its atrophying effects.
I don’t have any more expertise or soothsaying power than you, so in the absence of priors to weight the options, I guess your opposing prediction is as likely as mine.
I’d just argue that the consequences of mine are bad enough to motivate us to try to stop it coming true, even if it isn’t guaranteed to do so.