One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don’t take it seriously, because both its possibility and its impossibility were presented as equally probable. The possibility of Strong AI is overwhelmingly more probable than the impossibility. People who currently don’t take Strong AI seriously will round off anything other than very strong evidence for the possibility of Strong AI to ‘evidence not decisive; continue default belief’, so their beliefs won’t change, and they will now think they’ve mastered the arguments and investigated the issue, and may be even less disposed to start taking Strong AI seriously (e.g. if they conclude that everyone who does take Strong AI seriously must be biased, crazy, or delusional to hold such high confidence, and distance themselves from those people to avoid association).
A dispassionate survey or exploration of the evidence might well avoid this failure mode on its own, in which case it is less a matter of doing active work to avoid it than of ensuring you don’t fall into the Always Present Both Sides Equally trap.
I had this thought recently when reading Robert Sawyer’s “Calculating God.” The premise was something along the lines of “what sort of evidence would one need, and what would have to change about the universe, to accept the Intelligent Design hypothesis?” His answer was “quite a bit”, but it occurred to me that a layperson not already familiar with the arguments involved might come away from it with the idea that ID was not improbable.