Ooh nice, thank you. I think this is also now my favorite AI fiction output.
It’s an interesting thing—it still has the issues I tend to have, but just less intensely enough that my hater reflex doesn’t trigger into hyperdrive. Like I find my emotional orientation is that of someone reading something by a student, and being like “oh, hey, nice, they might have potential.”
If this were written for a workshop I was in, and I felt I could be honest without quashing the author’s dreams, I think my feedback would be, like, “try less hard”. The whimsical magical realism constructions become a little relentless by the end. But that’s fixable. Harder to fix are what feel like non sequiturs to me, though I often find things to be non sequiturs that other people don’t, so I may just be oversensitive there. (For instance, do small children and old cats get the same kind of pity? Is that a real thing? It feels like a rhythmic deepity that doesn’t actually point to any emotion people feel.)
But! It ain’t terrible. Updates me slightly.
Yeah, I agree that I’m probably too attached to the attractor basin idea here. It seems like some sort of weighted combination of that and what you suggest, though I’d frame the “all over the place” as the chatbots not actually having enough of something (parameters? training data? oomph?) to capture the actual latent structure of very good short (or longer) fiction. It could be as simple as there being an awful lot of terrible poetry online that doesn’t have the latent structure the great stuff has. If that’s a big part of the problem, we should solve it sooner than I’d otherwise expect.