What is the “first mover advantage”? Are you worried about the CDC itself creating and releasing deadly novel global pandemics? To me, that seems like a crazy thing to be worried about. Nobody thinks that creating and releasing deadly novel global pandemics is a good idea, except for crazy ideologues like Seiichi Endo. Regrettably, crazy ideologues do exist. But they probably don’t exist among CDC employees.
I would expect the CDC to “engage with me in serious dialogue and good faith”. More specifically, I expect that I would show them the instructions, and they would say “Oh crap, that sucks. There’s nothing to do about that, except try to delay the dissemination of that information as long as possible, and meanwhile try to solve the general problem of pandemic prevention and mitigation. …Which we have already been trying to solve for decades. We’re making incremental progress but we sure aren’t going to finish anytime soon. If you want to work on the general problem of pandemic prevention and mitigation, that’s great! Go start a lab and apply for NIH grants. Go lobby politicians for more pandemic prevention funding. Go set up wastewater monitoring and invent better rapid diagnostics. Etc. etc. There’s plenty of work to do, and we need all the help we can get, God knows. And tell all your friends to work on pandemic prevention too.”
If the CDC says that, and then goes back to continuing the pandemic prevention projects that they were already working on, would you still advocate my publishing the blog post after 90 days? Can you spell out exactly what bad consequences you expect if I don’t publish it?
I’m mostly on the same page there, although in the case of AI I do worry about the NSA or whoever getting a first-mover advantage in terms of manufacturing consent. (For readers unfamiliar with the term, it comes from Edward S. Herman and Noam Chomsky’s book of the same name.)
Is that a thing to be terrified about, or a thing to celebrate and advocate? I’m deeply unsure. I think America is mostly better than its competitors in terms of the kind of craziness it puts out into the world and the kind of atrocities it commits and supports: a mix of good and bad crazy, and a relatively limited but still way-too-high number of atrocities.
What bad consequences can we expect from an unbalanced ability to manufacture consent? The same consequences Herman and Chomsky pointed to in the original book: deep and unwavering public support for things that are undeniably atrocities, like (in their case) the various horrible crimes the US committed in Vietnam, such as the use of Agent Orange and the My Lai massacre.
These are the predictable and nigh-inevitable consequences I foresee from allowing a government monopoly on the use of strong large language models. We are, of course, in no danger of a literal government monopoly, but we are in significant danger of having all capable large language models in the hands of either governments or large profit-driven corporations, a concentration that could still generate atrocities if we allow it.
So, again, I converge to “90 days plus or minus a few decades, as my conscience and my desire to stay out of jail dictate”.
I’m still talking about the pandemic thing, not AI. If we’re “mostly on the same page” that publishing and publicizing the blog post (with a step-by-step recipe for making a deadly novel pandemic virus) is a bad idea, then I think you should edit your post, right?
Unfortunately, I lack the necessary karma to edit this post (or post a new one) for the next five days. For now, I stand by what I wrote, as defended and clarified in the comments. My planned edits would mostly just add a discussion of the standard responsible disclosure policy that computer security researchers follow, and how I think it should be adapted in light of anticipated planetary-scale impact.
So it seems I’ve done what I can for the moment. If you really think the post is setting a bad example, or is fundamentally misguided about something important, I’d suggest either writing your own post outlining points of agreement and disagreement, or waiting five days until my karma resets and I can edit.
Thank you for your insightful comments, questions, and pushback. You’ve helped me clarify my internal models and reframe them into much more standard, sane-sounding arguments.
OK, I’ll add the stuff about responsible disclosure to the post. I agree that it’s a very important omission from my internal mental model of things.