Oh, interesting. Glad to hear your take on it. Although personally, I don’t actually think of it as being about AI. I think of it as being about what a posthuman future looks like more generally, which is gonna have uploading and simulation and self-modification whether or not there’s AI involved.
Having said that, your comment does make me feel excited about writing something along these lines that’s not about the future, and is purely about agency/decision-theory/etc. I think the risk is landing in the uncanny valley where it’s not quite raising interesting questions about those topics, and it’s not quite emotionally engaging. E.g. I think the first half of the endings in both this and Ants and Grasshopper are kinda underwhelming by themselves.
Maybe the way to do it would be to flesh out in a bunch of detail what the selkie world is like, so that there’s some real emotional heft behind the decision that she’s making. I can imagine doing something like that with a different story, but I’m also not sure I’m skillful enough to properly succeed.
There’s a story on some blog (maybe Ozy’s, or something similar in concept-space) about an analogy between children and mind-controlling aliens. Can’t remember what it’s called, but would appreciate a link if anyone has one; it does a great job at raising interesting questions about identity and agency via packing an emotional punch.
fwiw, while the end of Ants and Grasshopper was really impactful to me, I did feel like the first half was “worth the price of admission”. (Though yeah, this selkie story didn’t accomplish that for me). I can imagine an alt ending to the grasshopper one that focused on “okay, but, like, literally today right now, what do I do with all these people who want resources from me that I can’t afford to give?”.
Yeah as I was writing it I realized “eh, okay it’s not exactly AI, it’s… transhumanism broadly?” but then I wasn’t actually sure what cluster I was referring to and figured AI was still a reasonable pointer.
I also did concretely wonder “man, how is he going to pack an emotional punch sticking to this agency/decision-theory theme?”. So, lol at that.
An idea fragment that just came to me is to showcase how the decision-theory applies to a lot of different situations, some of which are transhuman, but not in an escalating way, such that it feels like the whole point of the story. The transhuman angle gives it “ultimate stakes”, by virtue of making the numbers really big. And that was important to why the grasshopper story was so haunting to me. But, it doesn’t have to end on that note.
I guess it doesn’t accomplish the goal my original comment was getting at, but one solution here is for the last parable to be something like “the earliest human (or life form, if you can justify it for dogs or chimps or something) that ever faced this sort of dilemma.” That gives it a kind of primal, mythic Ur quality that has weight in part because of the transhumanist future that descends from it, but centers it in something much more mundane and makes the mundanest version of it still feel important.
That feels like cheating, though, because it’s still drawing weight from the transhuman element. But it’s at least a different angle, and if the different vignettes aren’t in “order of ascending futurism” it could be more about the decisionmaking itself.
(The story ~~“Uprooted”~~ “Spinning Silver” is coming to mind here, btw, and might be worth reading for inspiration ((and because it’s just good on its own)). It’s a novel that’s essentially a retelling of “Rumpelstiltskin”, but about a Jewish moneylender who faces various choices of how to relate to other townspeople ((who are treating her badly, antisemitically)), and has to adopt a kind of coldness to actually enforce them paying her back, with escalating stakes.)
lol at the spellchecker choking on “Rumpelstiltskin” and not offering any alternate suggestions.
(I think you’re thinking of Spinning Silver not Uprooted btw.)
Oh lol whoops.