Thanks for the comment; I appreciate the response! One thing: I would say that people should generally avoid assuming other motives or bad epistemics (i.e. motivated reasoning) unless it's pretty obvious (which I don't think is the case here) and can be resolved by pointing it out. Doing so usually doesn't help any of the parties get closer to the truth, and if anything, it creates bad faith among people, leading them to actually develop other motives (which is bad, I think).
I also would be interested in what you think of my response to the argument that the commenter made.
Excellent point about the potential to create polarization by accusing one side of motivated reasoning.
This is tricky, though, because it’s also really distorting our epistemics if we think that’s what’s going on, but won’t say it publicly. My years of research on cognitive biases have made me think that motivated reasoning is ubiquitous, a large effect, and largely unrecognized.
One approach is to always mention motivated reasoning on the other side as well. Those of us most involved in the x-risk discussion have plenty of emotional reasons to believe in doom. These include in-group loyalty and the desire to be proven right once we've staked out a position. And laypeople who simply fear change may be motivated to see AI as a risk without any real reasoning.
But most doomers are also technophiles and AI enthusiasts. I try to monitor my biases, and I can feel a huge temptation to overrate arguments for safety and finally be able to say "let's build it!" We tend to believe that successful AGI would pretty rapidly usher in a better world, including potentially saving us and our loved ones from the pain and horror of disease and involuntary death. Yet we argue against building it.
It seems like motivated reasoning pushes harder on average in one direction on this issue.
Yet you’re right that accusing people of motivated reasoning sounds like a hostile act if one doesn’t take it as likely that everyone has motivated reasoning. And the possible polarization that could cause would be a real extra problem. Avoiding it might be worth distorting our public epistemics a bit.
An alternative is to say that these are complex issues, and reasoning about likely future events with no real reference class is very difficult, so the arguments themselves must be evaluated very carefully. When they are, arguments for pretty severe risk seem to come out on top.
To be fair, I think that Knightian uncertainty plays a large role here; I think very high p(doom) estimates are just as implausible as very low ones.
First let me say that I really appreciate your attitude toward this, and your efforts to fairly weigh the arguments on both sides, and help others do the same. That’s why I’m spending this much time responding in detail; I want to help with that project.
I looked at your response to that argument. I think we’re coming from a pretty different perspective on how to evaluate arguments. My attitude is uncommon in general, but I think more or less the default on LessWrong: I think debates are a bad way to reach the truth. They use arguments as soldiers. I think arguments about complex topics in science, including AI alignment/risk, need to be evaluated in-depth, and this is best done cooperatively, with everyone involved making a real effort to reach the truth, and therefore real effort to change their mind when appropriate (since it’s emotionally hard to do that).
It sounds to me like you're looking at arguments more the way they work in a debate or a Twitter discussion. If we are going to make our decisions that way as a society, we've got a big problem. I'm afraid we will, and do. But that's no reason to make the problem worse by evaluating the truth that way when we've got time and goodwill. And it's no reason to perpetuate that mode of reasoning by publishing lists of arguments without commentary on whether they're good arguments in your relatively expert opinion. If people publish lists of reasons for x-risk that include bad arguments, I really wish they wouldn't, for this reason.
To your specific points:
The point of this post, as stated at the beginning, is not to produce novel arguments that lack response. Rather, the point is to consolidate the AI optimist side’s points as has been done much more (in my opinion) on the AI worried side.
Fair enough, and a worthy project. But you’ve got to draw the line somewhere on an argument’s validity. Or better yet, assign each a credence. For instance, your “intuition says this is weird” is technically a valid argument, but it should add only a tiny amount of credence; this is a unique event in history, so we’d expect intuition to be extraordinarily bad at it.
Each argument, even with a solid rebuttal, should still change people’s probabilities. Nobody has 100% probability in their counter arguments being true, and I actually think people typically overweight the ability to ignore an argument because they found a satisfying counter argument.
Technically true, but this is not, as the internet seems to think, the best kind of true. As above, assigning a credence is the right (although time-consuming) way to deal with this. If you don’t, you’re effectively setting a threshold for inclusion, and an argument that is 99% invalidated (like LeCun’s “just don’t make them seek dominance” argument) probably shouldn’t be included.
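To put rough numbers on that (purely illustrative; the figures and the simple linear model below are my own assumptions, not anything from your post): if a counterargument has, say, a 99% chance of holding, the original argument should still shift your credence, but only by the 1% of its force that survives. A quick sketch:

```python
# Minimal sketch with made-up numbers: how much should an argument move
# your credence once you account for a strong counterargument?
# Assumption (for illustration only): if the rebuttal holds, the argument
# carries no weight; if it fails, the argument shifts your credence by its
# full unrebutted amount.

def expected_shift(raw_shift: float, p_rebuttal_holds: float) -> float:
    """Expected credence shift from an argument, discounted by the
    probability that its rebuttal succeeds."""
    return raw_shift * (1.0 - p_rebuttal_holds)

prior = 0.30              # hypothetical starting credence in some claim
raw_shift = 0.10          # how much the argument would move you if unrebutted
p_rebuttal_holds = 0.99   # an argument that is "99% invalidated"

posterior = prior + expected_shift(raw_shift, p_rebuttal_holds)
print(f"shift = {expected_shift(raw_shift, p_rebuttal_holds):.4f}")  # 0.0010
print(f"posterior = {posterior:.4f}")                                # 0.3010
```

On this toy model the argument still moves the needle, just by so little that it falls below any reasonable inclusion threshold, which is the sense in which "technically it should update you" and "it probably shouldn't be in the list" are compatible.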
Arguments going back and forth can be done for days on every topic, and one can go up so many levels. An argument can be made against almost any claim. As stated, I think the main things to be worried about are initial arguments and probably second responses in most cases. The back and forth on a single point really does very little in my opinion.
As I said in the opener, I think that’s how arguments work in debates or on Twitter, but it’s not at all true of good discussions. With cooperative participants, it may take a good deal of discussion to identify cruxes and clarify the topic in ways that work toward agreement and better estimates of the truth. Presenting the arguments to relative novices similarly requires that depth before they’re going to understand what the argument really says, what assumptions it depends on, and therefore how valid it is to them.
Thanks! Honestly, I think this kind of project needs to get much more appreciation and should be done more often by those who are very confident in their positions and would like to steelman the other side. I also often hear from people who are very confident about their beliefs and truly have no idea what the best counterarguments are. Maybe this is uncommon, but I went to an in-person rationalist meetup last week, and the people there were really confident yet hadn't heard of a bunch of these counterarguments, which I thought was not at all in the LessWrong spirit. That interaction was one of my inspirations for the post.
I think I agree, but I'm having a bit of trouble understanding how you would evaluate arguments so differently than I do now. I would say my method is pretty different from that of Twitter debates (in many ways, I am very sympathetic to and influenced by the LessWrong approach). I could have made a list of cruxes for each argument, but I didn't want the post to be too long, since far fewer people would read it; that's why I recommended right at the beginning that people first get a grasp on the general arguments for AI being an existential risk. (Adding a credence or range, I think, is pretty silly given that people should be able to assign their own, and I'm just some random undergrad on the internet.)
Yep, I totally agree. I don't personally take the argument super seriously (though I attempted to steelman it, as I think other people take it very seriously). I was initially going to respond to every argument, but I didn't want to make a 40+ minute post. I also did qualify that claim a bunch (as I did with others, like the intractability argument).
Fair point. I do think the LeCun argument misunderstands a bunch about different aspects of the debate, but he’s probably smarter than me.
I think I'm gonna have to just disagree here. While I definitely think finding cruxes is extremely important (and this sometimes requires much back and forth), there is a certain way arguments can go back and forth that I tend to think has little (and should have little) influence on beliefs. I'm open to being wrong, though!
Different but related point:
I think, generally, I largely agree with you on many things you’ve said and just appreciate the outside view more. A modest epistemology of sorts. Even if I don’t find an argument super compelling, if a bunch of people that I think are pretty smart do (Yann LeCun has done some groundbreaking work in AI stuff, so that seems like a reason to take him seriously), I’m still gonna write about it. This is another reason why I didn’t put credences on these arguments—let the people decide!