Mitchell, Gebru, and Bender express their opinions on such things in more detail in the video I linked. Here's the overcompressed summary. It badly miscompresses the video, but it's a reasonable pitch for watching the full thing, so that you can respond to the points themselves rather than to this facsimile. If you can put your annoyance at them missing the point about x-risk on hold and just try to empathize with their position — they have also been trying to ring alarm bells and being dismissed, and they feel like the x-risk crowd is just controlled opposition being used to dismiss their warnings — I think it could be quite instructive.
I also strongly recommend watching this video (the timestamp is about 30 seconds before the part I'm referencing), where Bengio and Tegmark have a discussion with, among others, Tawana Petty, and they also completely miss the point about present-day harms. In particular, note that as far as I can tell she's not frustrated that they're speaking up; she's frustrated that they seem oblivious in conversation to what the present-day harms even are. When she brings them up, they defend themselves as having already done something, which in my view misses the point: she was looking for action on present-day harms to be woven into the action they're demanding from the start. "Why didn't they speak up when Timnit got fired?", or so. She's been pushing for people like them to speak up for years, and she appears to feel frustrated that even when they do bring it up, they won't mention the things she sees as the core problems.

Whether or not she's right that the present-day problems are the core, I agree enthusiastically that present-day problems are intensely terrible and are a major issue we should in fact acknowledge and integrate into plans for action as best we can. This will remain a point of tension, as some won't want to "dilute" the issue by bringing up "controversial" issues like racism. But I'd like to at least zoom in on this core point of conflict, since it seems to get repeatedly missed. We need to be integrating these concerns, not redirecting away from them. I don't know how to do that off the top of my head. Tegmark responds to this, but I feel like it's a pretty weak response composed on the fly, and it'd be worth the time to ponder asynchronously how to respond more constructively.
“This has been killing people!”
“Yes, but it might kill all people!”
“Yes, but it’s killing people!”
“Of course, sure, whatever, it’s killing people, but it might kill all people!”
You can see how this is not a satisfying response. I don’t pretend to know what would be.
“Of course, sure, whatever, it’s killing people, but it might kill all people!”
But this isn't the actual back-and-forth; the third line should be "no it won't, you're distracting from the people currently being killed!" This is all a game to subtly beg the question. If AI is an existential threat, all current mundane threats like misinformation, job loss, AI bias, etc. are rounding errors relative to the total harm; the only situation where you'd talk about them is if you've already granted that the existential risks don't exist.
If a large asteroid is heading towards Earth, and some group thinks it won't actually hit Earth but merely pass harmlessly close by, and they start talking about the sun's reflections off the asteroid making life difficult for people with sensitive eyes… they are trying to get you to assume the conclusion.
Sure, I agree, the asteroid is going to kill us all. But it would be courteous to acknowledge that it's going to hit a poor area first, and they'll die a few minutes earlier. Also, uh, all of us are going to die; I think that's the core thing! We should save the poor area, and also all the other areas!
rounding errors relative to the total harm; the only situation where you'd talk about them is if you've already granted that the existential risks don't exist
It's possible to consider relatively irrelevant things, such as everything in ordinary human experience, even when there is an apocalypse on the horizon. The implied contextualizing norm demands an inability to consider them, or at least raises the cost of doing so.