I do not (yet) know that Nye resource so I don’t know if I endorse it.
That makes sense; I would never ask for such an endorsement. I don’t think it would help MIRI directly, but Soft Power is one of the most influential concepts among modern international relations experts and China experts, and it’s critical for understanding the environment that AI safety public communication takes place in. For example, if the world is already oversaturated with highly professionalized information warfare, that has big implications: MIRI could be fed false data to mislead them into believing they are succeeding at describing the problem when in reality the needle isn’t moving.
I think in the past, many of us didn’t bring this up with people outside the bubble for a variety of reasons: we expected to be dismissed or misunderstood, it just seemed fruitless, or we didn’t want to freak them out.
I think it’s time to freak them out.
That’s pretty surprising to me; for a while I assumed that the scenario where 10% of the population knew about superintelligence as the final engineering problem was a nightmare scenario, e.g. because it would cause acceleration. I even abstained from helping Darren McKee with his book attempting to describe the AI safety problem to the public, even though I wanted to, because I was worried about contributing to making people more capable of spreading AI safety ideas.
If MIRI has changed their calculus on this, then of course I will defer to that since I have a sense of how far outside my area of expertise it is. But it’s still a really big shift for me.
We hope that if there is an upsurge of public demand, we might get regulation/legislation limiting the development and training of frontier AI systems and the sales and distribution of the high-end GPUs on which such systems are trained.
I’m not sure what to make of this; AI advancement is pretty valuable for national security (e.g. allowing hypersonic nuclear missiles to continue flying under the radar if military GPS systems are destroyed or jammed/spoofed) and for the balance of power between the US and China in other ways, similar to nuclear weapons. And if public opinion had turned against nuclear weapons during the 1940s and 1950s, back when democracy was stronger, I’m not sure it would have had much of an effect (perhaps it would have pushed development underground). I’m still deferring to MIRI on this pivot and will help other people take the advice in this comment; I found this comment really helpful and I’m glad that big pivots are being committed to, but I’ll also still be confused and apprehensive.
That’s pretty surprising to me; for a while I assumed that the scenario where 10% of the population knew about superintelligence as the final engineering problem was a nightmare scenario, e.g. because it would cause acceleration.
“Don’t talk too much about how powerful AI could get because it will just make other people get excited and go faster” was a prevailing view at MIRI for a long time, I’m told. (That attitude pre-dates me.) At this point many folks at MIRI believe that the calculus has changed, that AI development has captured so much energy and attention that it is too late for keeping silent to be helpful, and now it’s better to speak openly about the risks.
Awesome, that’s great to hear; these recommendations and this guidance are helpful and more than I expected, thank you.
I can’t wait to see what you’ve cooked up for the upcoming posts. MIRI’s outputs (including decisions) are generally pretty impressive and outstanding, and I have a feeling that I’ll appreciate and benefit from them even more now that people are starting to pull out all the stops.