One potential failure mode to watch out for is readers who come away believing they now understand the arguments around Strong AI yet don’t take it seriously, because its possibility and its impossibility were presented as equally probable.
I had this thought recently while reading Robert Sawyer’s “Calculating God.” Its premise is roughly: what evidence would one need, and what would have to be different about the universe, to accept the Intelligent Design hypothesis? Sawyer’s answer is “quite a bit,” but it occurred to me that a layperson unfamiliar with the arguments involved might come away with the impression that ID is not improbable.