I see it as an example of the kind of story where the author has a really cool idea, but forces a pointless conflict onto it so that there will be a plot.
I would have liked to see the story end without the second AI (augmented individual), though I did like the story as it was. The issue I found with it was that their conflict of values was artificial. Human value is more complex than what was depicted (aesthetic hedonism(?) vs. utilitarianism), and unless the author had some thesis that such an augmented human would simplify their values, I would have enjoyed seeing them cooperate to a better end, for Earth and for the protagonist. Their goals did not conflict in any way (unless the protagonist was a paperclipper for intelligence). Through cooperation they could have achieved a result of greater value: a faster utopia for Reynolds, an isolated echo chamber for the protagonist, and perhaps even a society of superintelligences.
I agree that the conflict was implausible, but then the magnitude and speed of growth of the main character’s intelligence was already magical enough that I’d already put the whole thing into the “stories that should be judged based on the aesthetic, not anything remotely resembling plausibility” category.
I quite like Chiang myself. There is a quality shared by a few authors like him, Miéville, and Egan that I can't quite pin down but really like. Possibly linguistics, good worldbuilding, and rarely having their characters be inexplicable idiots.
Oooh, I read this and...
(ROT13 spoiler, decoded:) As usual, I love reading Chiang. (Introduced to him through "Hell is the Absence of God", which is excellent.) But even though I was fascinated by this story as it unfolded, I felt cheated by the climax. It was just so frustrating to have the connection between two clever people (manipulating the markets to send a message! squee!) be so petty and small. Perhaps the point was that intelligence augmentation is orthogonal to ethical advance, but I wasn't convinced that these two people were so unpleasant to begin with, so the waste rankled.
Fictional evidence for the orthogonality thesis :)