A useful model for why it’s both appealing and difficult to say ‘Doomers and Realists are both against dangerous AI and for safety—let’s work together!’
AI realism also risks becoming a form of security theater that obscures the existential risks of AI.
Yes, agreed. Indeed, one of the things that motivated me to propose this three-sided framework is watching discussions of the following form:
1. A & B both state that they believe that AI poses real risks that the public doesn’t understand.
2. A takes (what I now call) the “doomer” position that existential risk is serious and that all other risks pale in comparison: “we are heading toward an iceberg, so it is pointless to talk about injustices on the ship between third-class and first-class passengers.”
3. B takes (what I now call) the “realist” or “pragmatist” position that existential risk is, if not impossible, very remote, and a distraction from more immediate concerns, e.g. the use of AI to spread propaganda or to deny worthy people loans or jobs: “all this talk of existential risk is science fiction and obscures the REAL problems.”
4. A and B then begin vigorously arguing with each other, each accusing the other of wasting time on unimportant issues.
My hypothesis/theory/argument is that at this point the general public throws up its hands, because the critics/experts can’t seem to agree on the basics.
By the way, I hope it’s clear that I’m not accusing A or B of doing anything wrong. I think they are both arguing in good faith from deeply held beliefs.
Yes, this has been very much on my mind: if this three-sided framework is useful/valid, what does it mean for the possibility of the different groups cooperating?
I suspect that the depressing answer is that cooperation will be a big challenge and may not happen at all, especially on questions such as “is the European AI Act in its present form a good start or a dangerous waste of time?” It strikes me that each of the three groups in the framework will have very strong feelings on this question:
Realists: yes, because even if it is not perfect, it is at least a start on addressing important issues like invasion of privacy.
Boosters: no, because it will stifle innovation.
Doomers: no, because you are looking under the lamppost where the light is better, rather than addressing the main risk, which is existential.