There is an awful lot of history. Before we can even ask whether we imagine the past vividly enough for it to carry proper weight, we must select a canon of "important" events to which we turn our attention.
In a recent thread on Reddit (http://reddit.com/info/2k77b/comments/c2k80o) I drew attention to Argentina because the story of Argentina's 20th century economic disappointments jars uncomfortably with the cultural tradition in which I swim. In that cultural stream, the misfortunes which may befall a country live in a hierarchy. At the top are the bad misfortunes: losing wars, and fighting wars at all. Somewhere near the bottom are the petty misfortunes: many countries are under the thumb of absolute rulers, and if the caudillo retains power by pursuing popular policies then his rule is not so bad.
I know little of Argentinian history and understand it even less. What little I know threatens my hierarchy of misfortune. It looks as though well-meaning but economically unsophisticated absolute rulers are the top misfortune. They are much worse than wars, which are intense but brief.
I want to overcome my bias by learning about Argentinian history, but I find myself struggling. There is a standard way of looking at recent history, with Hitler, Stalin, Mao, the Great War, and so on. I notice that I am very dependent on social support and just get sucked into looking at history from that point of view because it is the common one.
So there is a second sense in which history may or may not be available to us. First, it is important to feel the force of history sufficiently strongly. But this could make things worse if we cultivate our feeling for a limited selection of history, chosen to support our standard narratives. The second requirement is breadth, and that is very difficult if the people around you aren't interested.
[rhetorical pose] We shouldn’t balance the risks and opportunities of AI. Enthusiasts for AI are biased: they underestimate the difficulties, and would not be so enthusiastic if they grasped how disappointing progress is likely to be. Detractors of AI are also biased: they underestimate the difficulties too, and you will have a hard time convincing them of those difficulties, because you would be trying to persuade them that they had been frightened of shadows.
So there are few opportunities which are likely to be altogether lost if we hang back through unnecessary fear. [/rhetorical]
Well, I happen to believe the two paragraphs above, but distinct from the question of whether I am right is the question of whether the phrase “We need to balance the risks and opportunities of AI” means something, or whether it is merely an applause light.
I think it is trivially true that we need to balance the actual risks and actual opportunities of AI. There is room for disagreement about whether we need to balance the perceived risks and perceived opportunities. If the perceptions are accurate we should, but there is scope to say, for example, that the common perception is wrong and a rogue AI will in fact be quite stupid and easily unplugged. This opens the way to a decoding of language in which
o We need to balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities correctly and
o We shouldn’t balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.
One question that I dwell on is “how do intelligent and well-intentioned persons fall to quarrelling?”. The idea of an Applause Light is illuminating, but I think it is also quite tangled. There is an ambiguity over whether a phrase is an Applause Light or a Policy Proposal. I suspect that the core problem is that it is awfully tempting to exploit this ambiguity rhetorically, deliberately coding one’s policy proposals in language that also functions as an Applause Light, so that they come across as obviously correct.
The fun starts when one does this subconsciously and someone else thinks it is deliberate and takes offence. Once this happens there is little chance of discovering the actual disagreement (which might be about the accuracy of risk assessments), for the conversation will be derailed into meta-conversations about empty phrases and rhetoric.