I’m mostly on the same page there, although in the case of AI I do worry about the NSA or whoever getting a first-mover advantage in terms of manufacturing consent. (That last concept comes from Edward Herman and Noam Chomsky’s book of the same name, in case you or other readers are unfamiliar with the term.)
Is that a thing to be terrified about, or a thing to celebrate and advocate for? I’m deeply unsure. I think America is mostly better than its competitors in terms of the kind of craziness it puts out into the world and the kind of atrocities it commits and supports: a mix of good and bad crazy, and a relatively limited but still way-too-high number of atrocities.
What bad consequences can we expect from an unbalanced ability to manufacture consent? The same consequences Chomsky was pointing to in his original book—deep and unwavering public support for things that are undeniably atrocities, like (in Chomsky’s case) the various horrible crimes the US committed in Vietnam, such as the use of Agent Orange, or the My Lai massacre.
These are the predictable and nigh-inevitable consequences I foresee from allowing a government monopoly on the use of strong large language models. We are, of course, in no danger of a literal government monopoly, but we are in significant danger of having all capable large language models in the hands of either governments or large profit-driven corporations, an arrangement that still seems like it could generate atrocities if we allow it.
So, again, I converge to “90 days plus or minus a few decades, as my conscience and my desire to stay out of jail dictate”.
I’m still talking about the pandemic thing, not AI. If we’re “mostly on the same page” that publishing and publicizing the blog post (with a step-by-step recipe for making a deadly novel pandemic virus) is a bad idea, then I think you should edit your post, right?
Unfortunately, I lack the necessary karma to edit this post (or post a new one) for the next five days. For now, I stand by what I wrote, as defended and clarified in the comments. My planned edits would mostly be a discussion of the standard responsible disclosure policy that computer security researchers follow, and of how I think it should be adapted in light of anticipated planetary-scale impact.
So it seems I’ve done what I can for the moment. If you really think the post sets a bad example or is fundamentally misguided about something important, my suggestion would be to write your own post outlining points of agreement and disagreement, or to wait five days for my karma to reset.
Thank you for your insightful comments, questions, and pushback. You’ve helped me clarify my internal models and reframe them into much more standard, sane-sounding arguments.
OK, I’ll add the material about responsible disclosure to the post. I agree that it was an important omission from my internal model of things.