There simply don’t exist arguments with the level of rigor needed to justify a claim such as this one without any accompanying uncertainty:
If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.
I think this passage, meanwhile, rather misrepresents the situation to a typical reader:
When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
This isn’t “the insider conversation”. It’s (the partner of) one particular insider, who exists on the absolute extreme end of what insiders think, especially if we restrict ourselves to those actively engaged with research in the last several years. A typical reader could easily come away from that passage thinking otherwise.
Would you say the same thing about the negations of that claim? If you saw e.g. various tech companies and politicians talking about how they’re going to build AGI and then [something that implies that people will still be alive afterwards], would you call them out and say they need to qualify their claim with uncertainty or else they are being unreasonable?
Re: the insider conversation: Yeah, I guess it depends on what you mean by ‘the insider conversation’ and whether you think the impression random members of the public will get from these passages brings them closer to or farther away from understanding what’s happening. My guess is that it brings them closer to understanding what’s happening; people just do not realize how seriously experts take the possibility that literally AGI will literally happen and literally kill literally everyone. It’s a serious possibility. I’d even dare to guess that the majority of people building AGI (weighted by how much they are contributing) think it’s a serious possibility, which maybe we can quantify as >5% or so, despite the massive psychological pressure of motivated cognition / self-serving rationalization to think otherwise. And the public does not realize this yet, I think.
Also, on a more personal level, I’ve felt exactly the same way about my own daughter for the past two years or so, ever since my timelines shortened.
Would you say the same thing about the negations of that claim? If you saw e.g. various tech companies and politicians talking about how they’re going to build AGI and then [something that implies that people will still be alive afterwards], would you call them out and say they need to qualify their claim with uncertainty or else they are being unreasonable?
Yes, I do in fact say the same thing to professions of absolute certainty that there is nothing to worry about re: AI x-risk.
The negation of the claim would not be “There is definitely nothing to worry about re AI x-risk.” It would be something much more mundane-sounding, like “It’s not the case that if we go ahead with building AGI soon, we all die.”
That said, yay: insofar as you aren’t just applying a double standard here, I’ll agree with you. It would have been better if Yud had added in some uncertainty disclaimers.
The negation of the claim would not be “There is definitely nothing to worry about re AI x-risk.” It would be something much more mundane-sounding, like “It’s not the case that if we go ahead with building AGI soon, we all die.”
I debated with myself whether to present the hypothetical that way. I chose not to, because of Eliezer’s recent history of extremely confident statements on the subject. I grant that the statement I quoted in isolation could be interpreted more mundanely, like the example you give here.
When the stakes are this high and the policy proposals are of the kind made in this article, I think clarity about how confident you are isn’t optional. I would also take issue with the mundanely phrased version of the negation.
(For context, I’m working full-time on AI x-risk, so if I were going to apply a double standard, it wouldn’t be in favor of people with a tendency to dismiss it as a concern.)
Thank you for your service! You may be interested to know that I think Yudkowsky writing this article will probably have on balance more bad consequences than good; Yudkowsky is obnoxious, arrogant, and most importantly, disliked, so the more he intertwines himself with the idea of AI x-risk in the public imagination, the less likely it is that the public will take those ideas seriously. Alas. I don’t blame him too much for it because I sympathize with his frustration & there’s something to be said for the policy of “just tell it like it is, especially when people ask.” But yeah, I wish this hadn’t happened.
(Also, sorry for the downvotes, I at least have been upvoting you whilst agreement-downvoting)
“But yeah, I wish this hadn’t happened.”
Who else is gonna write the article? My sense is that no one (including me) is publicly stating the seriousness of the situation so starkly.
“Yudkowsky is obnoxious, arrogant, and most importantly, disliked, so the more he intertwines himself with the idea of AI x-risk in the public imagination, the less likely it is that the public will take those ideas seriously”
I’m worried about people making character attacks on Yudkowsky (or other alignment researchers) like this. I think the people who believe they can probably solve alignment by just going full-speed ahead and winging it are the arrogant ones; Yudkowsky’s arrogant-sounding comments about how we need to be very careful and slow are negligible in comparison. I’m guessing you agree with this (not sure), and we should be able to criticise him for his communication style, but I am a little worried about people publicly undermining Yudkowsky’s reputation in that context. This does not seem like what we would do if we were trying to coordinate well.
I agree that there’s a need for this sort of thing to be said loudly. (I’ve been saying similar things publicly, in the sense of anyone-can-go-see-that-I-wrote-it-on-LW, but not in the sense of putting it into major news outlets that are likely to get lots of eyeballs.)
I’m worried about people making character attacks on Yudkowsky (or other alignment researchers) like this. I think the people who believe they can probably solve alignment by just going full-speed ahead and winging it are the arrogant ones; Yudkowsky’s arrogant-sounding comments about how we need to be very careful and slow are negligible in comparison. I’m guessing you agree with this (not sure), and we should be able to criticise him for his communication style, but I am a little worried about people publicly undermining Yudkowsky’s reputation in that context. This does not seem like what we would do if we were trying to coordinate well.
I do agree with that. I think Yudkowsky, despite his flaws,* is a better human being than most people, and a much better rationalist/thinker. He is massively underrated. However, given that he is so disliked, it would be good if the Public Face of AI Safety were someone other than him, and I don’t see a problem with saying so.
(*I’m not counting ‘being disliked’ as a flaw, btw; I do mean actual flaws, e.g. arrogance and overconfidence.)
I agree that this article is net negative, and I would go further: It has a non-trivial chance of irreparably damaging relationships and making the AI Alignment community look like fools, primarily due to the call for violence.
This is a case where the precautionary principle grants a great deal of rhetorical license. If you think there might be a lion in the bush, do you have a long and nuanced conversation about it, or do you just tell your tribe, “There’s a lion in that bush. Back away”?
X-risks tend to be more complicated beasts than lions in bushes, in that successfully avoiding them requires a lot more than reflexive action: we’re not going to navigate them by avoiding a careful understanding of them.
I actually agree entirely. I just don’t think that we need to explore those x-risks by exposing ourselves to them. I think we’ve already advanced AI enough to start understanding and thinking about those x-risks, and an indefinite (perhaps not permanent) pause in development will enable us to get our bearings.
Say what you need to say now to get away from the potential lion. Then back at the campfire, talk it through.
“I do agree with that. I think Yudkowsky, despite his flaws, is a better human being than most people, and a much better rationalist/thinker.”
Thanks, I appreciate the spirit with which you’ve approached the conversation. It’s an emotional topic for people, I guess.
“It has a non-trivial chance of irreparably damaging relationships and making the AI Alignment community look like fools, primarily due to the call for violence.”
FWIW, I think it’s pretty unfair and misleading to characterize what he said as a call for violence.
I’ve been persuaded in the comment threads that I was wrong about Eliezer specifically advocating violence, so I retract my earlier comment.
“…an indefinite (perhaps not permanent) pause in development will enable us to get our bearings.”
If there were a game-theoretically reliable way to get everyone to pause all together, I’d support it.
“If you think there might be a lion in the bush, do you have a long and nuanced conversation about it, or do you just tell your tribe, ‘There’s a lion in that bush. Back away’?”
Because the bush may have things you need and p(lion) is low. There are trade-offs you are ignoring.
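To make that trade-off explicit, here is a rough expected-cost framing of the lion analogy; the notation is illustrative and not something anyone upthread has stated. Backing away is the right call only when whatever the bush might hold is worth less than the probability-weighted cost of the lion:
$$\text{back away} \iff V_{\text{bush}} < p(\text{lion}) \cdot C_{\text{lion}}$$
where $V_{\text{bush}}$ is the value of what the bush contains, $p(\text{lion})$ is the probability that a lion is actually there, and $C_{\text{lion}}$ is the cost of meeting it. The disagreement above then comes down to the sizes of these terms: whether $C_{\text{lion}}$ is so large that even a small $p(\text{lion})$ makes the right-hand side dominate.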
Proposition 1: Powerful systems come with no x-risk.
Proposition 2: Powerful systems come with x-risk.
You can prove or disprove 2 by disproving or proving 1.
Why is it that a lot of [1,0] people believe that the [0,1] group should prove their case, and also ignore all the arguments that have been offered? [1]
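To spell out the logic, here is a minimal formalization; it assumes Proposition 2 is read simply as the negation of Proposition 1, and that “[1,0]” means full credence in Proposition 1 and none in Proposition 2.
Let $R$ denote “powerful systems come with x-risk.” Then
$$P_1 \equiv \neg R, \qquad P_2 \equiv R, \qquad \text{and hence} \qquad P_2 \equiv \neg P_1.$$
Under that reading, any argument establishing $P_1$ refutes $P_2$, and any argument refuting $P_1$ establishes $P_2$: the two camps are debating one and the same proposition, so neither can push the entire burden of proof onto the other.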