I finished it, and felt much the same way. As with eutopia, I’d expect a frank conversation with a superintelligence to be deeply challenging; it would be very surprising if all the substantial advice it had for me turned out identical to what merely mortal soft transhumanists with environmental and social concerns are already saying. In fact, I wonder why the god of the story bothered; he’s clearly not telling us anything we don’t already know, and he isn’t saying it in a way that’ll substantially contribute to the meme’s spread.
I agree too. Above all, I wouldn’t expect a conversation with a superintelligence to be so boring.
Obviously a human author cannot foresee what “substantial advice” a transhuman intelligence would actually offer; therefore the story’s god probably just functions as a mouthpiece for the author. I suggest you stop thinking about whether a superintelligent being would in fact say the things that character does, and start thinking about whether those things make sense on their own terms.
You don’t say.
I know an authorial tract when I see one; but putting aside my basic distaste for the form and considering the short story purely as persuasive writing, I’d say it fares even worse. It presents little reasoning, practically no evidence, and not even any especially interesting ideas. There’s nothing there but a sketch of a future history and a few generic environmental and social warnings, and I can get that without the smug overtones (well, without the same smug overtones, at least) on any left-leaning transhumanist blog.