From the transcript:

Robin Hanson: Well, even that is an interesting thing if people agree on it. You could say, “You know a lot of people who agree with you that AI risk is big and that we should deal with something soon. Do you know anybody who agrees with you for the same reasons?”
It’s interesting, so I did a poll, I’ve done some Twitter polls lately, and I did one on “Why democracy?” And I gave four different reasons why democracy is good. And I noticed that there was very little agreement, that is, relatively equal spread across these four reasons. And so, I mean that’s an interesting fact to know about any claim that many people agree on, whether they agree on it for the same reasons. And it would be interesting if you just asked people, “Whatever your reason is, what percentage of people interested in AI risk agree with your claim about it for the reason that you do?” Or, “Do you think your reason is unusual?”
Because if most everybody thinks their reason is unusual, then basically there isn’t something they can all share with the world to convince the world of it. There’s just the shared belief in this conclusion, based on very different reasons. And then it’s more on their authority of who they are and why they as a collective are people who should be listened to or something.
I think there’s something to this idea. It also reminds me of the principle that one should beware surprising and suspicious convergence, as well as of the following passage from Richard Ngo:

What should we think about the fact that there are so many arguments for the same conclusion? As a general rule, the more arguments support a statement, the more likely it is to be true. However, I’m inclined to believe that quality matters much more than quantity—it’s easy to make up weak arguments, but you only need one strong one to outweigh all of them. And this proliferation of arguments is (weak) evidence against their quality: if the conclusions of a field remain the same but the reasons given for holding those conclusions change, that’s a warning sign for motivated cognition (especially when those beliefs are considered socially important). This problem is exacerbated by a lack of clarity about which assumptions and conclusions are shared between arguments, and which aren’t.
(I don’t think those are the points Robin Hanson is making there, but they seem somewhat related.)
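As a rough toy illustration of the quality-over-quantity point (the numbers and the independence assumption here are entirely my own, not Ngo’s): if we treat each argument as a likelihood ratio applied to one’s prior odds, a handful of weak arguments for a conclusion can be outweighed by a single strong argument against it.

```python
# Toy Bayesian sketch (made-up numbers): several weak arguments for a conclusion
# vs. one strong argument against, combined as likelihood ratios on prior odds.
# This assumes the arguments provide independent evidence, which is a big simplification.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each argument's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1.0            # start at 50/50
weak_for = [1.3] * 5        # five weak arguments for, each a 1.3x update
strong_against = [0.1]      # one strong argument against, a 10x update the other way

odds = posterior_odds(prior_odds, weak_for + strong_against)
print(f"Posterior probability: {odds_to_prob(odds):.2f}")
# 1.3**5 is about 3.7, times 0.1 gives odds of about 0.37,
# i.e. a posterior of roughly 0.27: the one strong argument outweighs the five weak ones.
```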
But another point should also be acknowledged: it seems at least possible that a wide range of people could actually “believe in” the exact same set of arguments, yet differ in which argument they find most compelling. E.g., you can only vote for one option in a Twitter poll, so it might be that all of Hanson’s followers believed in all four reasons why democracy is good, but just differed in which one seemed strongest to them.
Likewise, the answer to “Whatever your reason is, what percentage of people interested in AI risk agree with your claim about it for the reason that you do?” could potentially be “very low” even if a large percentage of people interested in AI risk would broadly accept that reason, or give it pretty high credence, simply because it might not be “the reason” they themselves agree with the claim.
It’s sort of like how a pick-one poll might show strong divergence of views over the “frontrunner” argument, whereas approval voting would reveal some subset of arguments that are pretty widely accepted. And that subset, taken collectively, may matter more to people than the specific frontrunner each of them finds most compelling.
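To make that concrete, here is a minimal sketch with invented respondents and argument labels (nothing here is from Hanson’s actual poll): each respondent accepts most of the arguments, but a pick-one poll still makes the group look evenly split.

```python
from collections import Counter

# Hypothetical respondents: each accepts some set of arguments (A-D) for the
# same conclusion, and would pick a single favourite in a one-vote poll.
respondents = [
    {"accepts": {"A", "B", "C", "D"}, "favourite": "A"},
    {"accepts": {"A", "B", "C"},      "favourite": "B"},
    {"accepts": {"A", "B", "C", "D"}, "favourite": "C"},
    {"accepts": {"B", "C", "D"},      "favourite": "D"},
    {"accepts": {"A", "B", "D"},      "favourite": "A"},
    {"accepts": {"A", "C", "D"},      "favourite": "D"},
]

# Pick-one ("Twitter poll") tally: looks like sharp disagreement.
plurality = Counter(r["favourite"] for r in respondents)

# Approval-style tally: count everyone who accepts each argument at all.
approval = Counter(arg for r in respondents for arg in r["accepts"])

print("Plurality (favourites):", dict(plurality))   # {'A': 2, 'B': 1, 'C': 1, 'D': 2}
print("Approval (accepted by):", dict(approval))    # every argument accepted by 5 of 6
# The pick-one results spread roughly evenly across A-D, while the approval
# counts show broad agreement on the underlying set of arguments.
```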
Indeed, AI risk seems to me like precisely the sort of topic where it could make sense for people to find a variety of somewhat related, somewhat distinct arguments somewhat compelling, to not be 100% convinced by any of them, and yet to still see them as adding up to good reason to pay significant attention to the risks. (I think that expectation of mine is very roughly based on how hard it is to predict the impacts of a new technology, combined with us having fairly good reason to believe that AI will eventually have extremely big impacts.)
“Do you think your reason is unusual?” seems like it’d do a better job than the other question of revealing whether people really disagree strongly about the arguments for the views, rather than just about which particular argument seems strongest. But I’m still not certain it would do that. I think it’d be good to explicitly ask people which argument they find most compelling, and separately which arguments they see as substantially plausible and as informing their overall views at least somewhat.