Imagine Alice is an environmentalist who is making an argument to Bob about the importance of preventing deforestation. Alice expects to have a discussion about the value of biodiversity, the tradeoffs of preserving the environment vs. economic productivity, that sort of stuff.
But instead of any of that, Bob replies that he’s concerned about wild animal welfare and that, e.g., the Amazon Rainforest is a vast cesspit of animal suffering. Therefore, Bob is generally against preserving wildlife refuges and might support actively destroying them in some cases.
I think this experience is probably very disorienting to Alice. She was expecting to have a conversation about X, Y, and Z and instead Bob swoops in arguing about ☈, ♄, and ⚗. When I’ve been in the Alice role in similar sorts of conversations, I’ve felt things like:
- Skepticism that Bob is stating his true reasons for his position
- Annoyance that Bob is sidetracking the conversation instead of engaging with the core arguments
- Disappointment that I didn’t get to make my case and see my argument (which I think is persuasive) land
I think all of these reactions are bad and unproductive (e.g. Bob isn’t sidetracking the conversation; the conversation just didn’t go according to my expectations). But they’re also extremely natural—I think it takes a lot of epistemic discipline to tamp down on these reactions, reorient to the conversation you’re actually having, and check whether you still stand by your old views.
---
I think proponents of open source, when they talk to AI safety folks, often find themselves in Alice’s position. They are expecting a discussion about the merits of openness, the risks of centralization, the harms of regulatory capture, etc. “But bioweapons,” Bob responds. If Alice has never thought about this point before, it’ll probably feel like it came totally out of left field, and she’ll have reactions similar to the ones I described above (e.g. skepticism that Bob is stating his true reasons).
(And note that this might be hard for Bob to notice! For Bob, the “core argument” here has always been about bioweapons and other considerations around offense/defense balance for existential threats. He might be confused/annoyed that Alice wants to talk about the merits of openness.)
What should Bob do here? I’m not really sure, but one idea is: to the extent that Bob can honestly say he agrees with Alice on what Alice views as being the “core issues,” he should start the conversation out by making that clear. E.g., if Bob is sympathetic to the general principles underlying Alice’s view, he could say so: “open source software has generally been great for the world, and I would love for there to be a proposal for open source AI that I could get behind.” Once that agreement is established, he could then move on to explaining why he thinks there are other considerations “outside of the scope of Alice’s argument” which he feels are more compelling.
---
I think Bob should be even more direct about what’s happening. “I know most of the people who disagree with you on this are thinking of X, Y, and Z. My reasons are different. My opinions on X, Y, and Z are largely similar to yours. But I’m concerned about ☈, ♄, and ⚗.” I think this approach would do even more than the idea in your last paragraph to make the surprise less jarring for Alice.