The LessWrong comments here are generally quite brutal, and I think I disagree with much of that reaction, which I’ll try to outline briefly below. But it may be more fruitful here to ask some questions I had, to break down the possible subpoints of disagreement about the goodness of this letter.
I expected some negative reaction because Elon is generally looked down upon by the EAs I know, with some solid backing to those claims when it comes to AI given that he co-founded OpenAI. But with the immediate press attention it’s getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (Co-founder, Apple), Andrew Yang, Jaan Tallinn (Co-founder, Skype, CSER, FLI), Max Tegmark (President, FLI), and Tristan Harris (from The Social Dilemma), among many others), I can’t really see the overall impact of this letter being net negative. At worst it seems mistimed and technically flawed, but at best it seems like one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI’s current presence in the news and in much of the world’s psyche.
But I’m not super certain about any of this, and I generally came away with a lot of questions. Here are a few:
How convergent is this specific call for a pause on developing strong language models with the kind of verifiable, tangible metric AI x-risk people would craft for AI labs to follow to reduce risk? Should this be seen as a good first step? Or is it something close enough to what we want that we could rally around this metric, given its endorsement by such an influential group?
This helps clarify the “6 months isn’t enough to develop the safety techniques they detail” objection, which was fairly well addressed here, as well as the “Should OpenAI be at the front?” objection.
How should we view messages that are geared more towards non-x-risk AI worries than the community tends to be? They ask a lot of good questions here, but they are also still asking “Should we let machines flood our information channels with propaganda and untruth?”, an important question, but one that to me seems to deviate from AI x-risk concerns.
This is at least tangential to the “This letter felt rushed” objection, because even if you accept that it was rushed, the next question is “Well, what’s our bar for how good something has to be before it’s put out into the world?”
Are open letters with influential signees impactful? To me this letter seems neutral at worst and quite impactful at best, but I have very little to back that up, and I honestly can’t recall any specific case where an open letter caused significant change at the global or national level.
Given the recent desire to distance from potentially fraught figures, would that mean shying away from a group-wide EA endorsement of such a letter because a wild card like Elon is part of it? I personally don’t think he’s at that level, but I know other EAs who would be apt to characterize him that way.
Do I sign the letter? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at “Tristan Williams, Tea Brewer” and think “Oh, what is he doing on this list?”
2. I think non-x-risk-focused messages are a good idea because:
It is much easier to reach a wide audience this way.
It is clear that there are significant and important risks even if we completely exclude x-risk. We should have this discussion even in a world where, for some reason, we could be certain that humanity will survive the next 100 years.
It widens the Overton window. X-risk is still mostly considered a fringe position among the general public, although the situation has improved somewhat.
3. There have been cases where it worked well. For example, the Letter of Three Hundred.
4. I don’t know much about EA’s concerns about Elon. Intuitively, he seems fine. But I think that, in general, people are biased towards too much distancing, which often hinders coordination a lot.
5. I think more signatures cannot make things worse if the authors handle them properly. Rough sorting by credentials (as FLI does) may already be good enough, but it’s possible and easy to be more aggressive here.
I agree that it’s unlikely this letter will be net bad, and that it could make a significant positive impact. However, I don’t think people argued that it would be bad; instead, people argued it could be better. It’s clearly not possible to do something like this every month, so it’s better to pay a lot of attention to detail and think really carefully about content and timing.
2. What is the Overton window? Otherwise I think I probably agree, but one question is: once this non-x-risk campaign is underway, how do you keep it on track and prevent value drift? Or do you not see that as a pressing worry?
3. Cool, will have to check that out.
4. Completely agree, and I just wonder what the best way to promote less distancing is.
Yeah, I suppose I’m just trying to put myself in the shoes of the FLI people who coordinated this, and I feel like many comments here are more lacking in compassion than I’d like, especially the more half-baked negative takes. I also agree that we want to pay attention to detail and timing, but there is also the world in which too much of this leads to nothing getting done, and it’s highly plausible to me that this letter had already been in the works long enough for that to be the case here.
Thanks for responding though! Much appreciated :)