One of the things I track is “ingredients for a good movie or TV show that would actually be narratively satisfying / memetically fit,” one that would convey good/realistic AI hard sci-fi to the masses.
One of the more promising strategies within that frame that I can think of is “show multiple timelines” or “flashbacks from a future where the AI wins, but slowly enough to be human-narrative-comprehensible” (with the flashbacks being about the people inventing the AI).
This feels like one of the reasonable options for a “future” narrative. (A previous one I was interested in was the “Green goo is plausible” concept.)
Also, I think many Richard Ngo stories would lend themselves well to being some kind of cool YouTube video, leveraging AI-generated content to make things feel higher budget and also sending an accompanying message of “the future is coming, like now.” (“King and the Golem” was nice, but felt more like a lecture than a video, or something.) A problem with AI-generated movies is that the tech isn’t there yet to avoid feeling slightly uncanny, but I think Ngo’s stories have a vibe where the uncanniness will be kinda fine.
You’ll lose an important audience segment the moment they recognize any AI-generated anything. The people who wouldn’t be put off by AI-generated stuff probably won’t be put off by the lack of it. You might be able to get away with it by using AI really unusually well, such that it’s just objectively hard to even get a hunch that AI was involved other than by the topic.
I’m skeptical that there are actually enough people so ideologically opposed to this that it outweighs the upside of driving home, through the medium itself, that capabilities are advancing. (Similar to how even though tons of people hate FB, few people actually leave.)
I’d want to target a quality level similar to this:
Perhaps multiple versions, then. I maintain my claim that you’re missing a significant segment of people who are avoiding AI manipulation moderately well, but as a result aren’t getting enough evidence about what the problem is.
I would bet they are <1% of the population. Do you disagree, or think they disproportionately matter?
Both—I’d bet they’re between 5 and 12% of the population, and that they’re natural relays of the ideas you’d want to broadcast, if only they weren’t relaying such mode-collapsed versions of the points. A claim presented without deductive justification: in trying to make media that is very high impact, making something opinionated in the ways you need it to be is good, and making that same something unopinionated in the ways you don’t need is also good. Also, the video you linked has a lot of additional opinionated features that I think are targeting a much more specific group than even “people who aren’t put off by AI”—it would never show up on my YouTube.
For a frame of reference, do regular movie trailers normally show up in your YouTube feed? This video seemed relatively “mainstream”-vibing to me, although somewhat limited by the medium.
I don’t fully agree with gears, but I think it’s worth thinking about. If you’re talking about “proportion of people who sincerely think that way”, and if we’re in the context of outreach, I doubt that matters as much as “proportion of people who will see someone else point at you and make ‘eww another AI slop spewer’ noises, then decide out of self-preservation that they’d better not say anything positive about you or reveal that they’ve changed their mind about anything because of you”. Also, “creatives who feel threatened by role displacement or think generative AI is morally equivalent to super-plagiarism (whether or not this is due to inaccurate mental models of how it works)” seems like an interest group that might have disproportionate reach.
But I’m also not sure how far that pans out in importance-weighting. I expect my perception of the above to be pretty biased by bubble effects, but I also think we’ve (especially in the USA, but with a bunch of impact elsewhere due to American cultural-feed dominance) just gone through a period where an overarching memeplex that includes that kind of thing has had massive influence, and I expect that to have a long linger time even if the wave has somewhat crested by now.
On the whole I am pretty divided about whether actively skirting around the landmines there is a good idea or not, though my intuition suggests some kind of mixed strategy split between operators would be best.