Are the people leading MIRI thinking that Bay Area Rationalists and LW users (who care about AI safety) should start building skills for communications research? That seems like a good capability to build, perhaps even 80,000 Hours-style, though it could plausibly have some drawbacks.
My thinking about this is that geopolitical research, world modelling, and open-source intelligence require a very large number of people doing very simple tasks, e.g. distilling and summarizing the information from Bill Bishop’s newsletter that’s relevant to AI geopolitics and the general trend of US-China relations, and all sorts of similar things that people didn’t realize were relevant to the situation with AI until someone happened to make a LessWrong post about it.
The impression I’ve gotten from AI safety is that the AI alignment technical research people often have really good generalist skills, pick up important concepts quickly (e.g. from psychology), and can do a lot with them, whereas AI governance people are often the type of person who’s really good at getting government jobs. So I can see big upsides to AI alignment technical researchers spending some time reading things like chapter 4 of Joseph Nye’s Soft Power, for example, which is a really well-respected source on the history of propaganda and information warfare waged by governments.
I do not (yet) know that Nye resource so I don’t know if I endorse it. I do endorse the more general idea that many folks who understand the basics of AI x-risk could start talking more to their not-yet-clued-in friends and family about it.
I think in the past, many of us didn’t bring this up with people outside the bubble for a variety of reasons: we expected to be dismissed or misunderstood, it just seemed fruitless, or we didn’t want to freak them out.
I think it’s time to freak them out.
And what we’ve learned from the last seven months of media appearances and polling is that the general public is actually far more receptive to x-risk arguments than we (at MIRI) expected; we’ve been accustomed to the arguments bouncing off folks in tech, and we over-indexed on that. Now that regular people can play with GPT-4 and see what it does, discussion of AGI no longer feels like far-flung science fiction. They’re ready to hear it, and will only get more so as capabilities demonstrably advance.
We hope that if there is an upsurge of public demand, we might get regulation/legislation limiting the development and training of frontier AI systems and the sales and distribution of the high-end GPUs on which such systems are trained.
I do not (yet) know that Nye resource so I don’t know if I endorse it.
That makes sense; I would never ask for such an endorsement. I don’t think the book would help MIRI directly, but Soft Power is one of the most influential concepts among modern international relations experts and China experts, and it’s critical for understanding the environment in which AI safety public communication takes place. For example, if the world is already oversaturated with highly professionalized information warfare, that has big implications: MIRI could be fed false data that misleads it into believing it is succeeding at describing the problem when in reality the needle isn’t moving.
I think in the past, many of us didn’t bring this up with people outside the bubble for a variety of reasons: we expected to be dismissed or misunderstood, it just seemed fruitless, or we didn’t want to freak them out.
I think it’s time to freak them out.
That’s pretty surprising to me; for a while I assumed that 10% of the population knowing about superintelligence as the final engineering problem was a nightmare scenario, e.g. because it would cause acceleration. I even abstained from helping Darren McKee with his book attempting to describe the AI safety problem to the public, even though I wanted to, because I was worried about contributing to making people more capable of spreading AI safety ideas.
If MIRI has changed their calculus on this, then of course I will defer to that since I have a sense of how far outside my area of expertise it is. But it’s still a really big shift for me.
We hope that if there is an upsurge of public demand, we might get regulation/legislation limiting the development and training of frontier AI systems and the sales and distribution of the high-end GPUs on which such systems are trained.
I’m not sure what to make of this. AI advancement is pretty valuable for national security (e.g. allowing hypersonic nuclear missiles to keep flying under the radar if military GPS systems are destroyed, jammed, or spoofed) and for the balance of power between the US and China in other ways, similar to nuclear weapons. And if public opinion had turned against nuclear weapons during the 1940s and 1950s, back when democracy was stronger, I’m not sure it would have had much of an effect (perhaps it would have pushed development underground). I’m still deferring to MIRI on this pivot and will help other people take the advice in this comment, which I found really helpful; I’m glad that big pivots are being committed to, but I’ll also still be confused and apprehensive.
That’s pretty surprising to me; for a while I assumed that 10% of the population knowing about superintelligence as the final engineering problem was a nightmare scenario, e.g. because it would cause acceleration.
“Don’t talk too much about how powerful AI could get because it will just make other people get excited and go faster” was a prevailing view at MIRI for a long time, I’m told. (That attitude pre-dates me.) At this point many folks at MIRI believe that the calculus has changed, that AI development has captured so much energy and attention that it is too late for keeping silent to be helpful, and now it’s better to speak openly about the risks.
Awesome, that’s great to hear. These recommendations and guidance are helpful and more than I expected; thank you.
I can’t wait to see what you’ve cooked up for the upcoming posts. MIRI’s outputs (including its decisions) are generally impressive and outstanding, and I have a feeling I’ll appreciate and benefit from them even more now that people are starting to pull out all the stops.