I spent a few hours reading, and parsing out, sections 4 and 5 of the recent White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
The following are my rough notes on each subsection in those two sections, summarizing what I understand each to mean, along with my personal thoughts.
My high level thoughts are at the bottom.
Section by section
Section 4 – Ensuring the Safety and Security of AI Technology.
4.1
Summary:
The Secretary of Commerce and NIST are going to develop guidelines and best practices for AI systems.
In particular:
“launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
What does this literally mean? Does this allocate funding towards research to develop these benchmarks? What will concretely happen in the world as a result of this initiative?
It also calls for the establishment of guidelines for conducting red-teaming.
[[quote]]
(ii) Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. These efforts shall include:
(A) coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and
(B) in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order.
Commentary:
I imagine that these standards and guidelines are going to be mostly fake.
Are there real guidelines somewhere in the world? What process leads to real guidelines?
4.2
Summary:
a
Anyone who has or wants to train a foundation model needs to:
Report their training plans and safeguards.
Report who has access to the model weights, and the cybersecurity protecting them.
Report the results of red-teaming on those models, and what they did to meet the safety bars.
Anyone with a big enough computing cluster needs to report that they have it.
b
The Secretary of Commerce (and some associated agencies) will make (and continually update) standards for which models and computing clusters are subject to the above reporting requirements. But for the time being, the interim thresholds are (see the sketch after this list):
Any models that were trained with more than 10^26 flops
Any models that are trained primarily on biology data and trained using greater than 10^23 flops
Any datacenter networked at greater than 100 gigabits per second
Any datacenter that can train an AI at greater than 10^20 flops per second
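As a rough sketch of how I read the model thresholds (my paraphrase and my own function names; the actual order text, and whatever Commerce later publishes, is what governs):

```python
# Sketch of the interim model-reporting triggers in 4.2(b), as I read them.
# Illustration only; the flop numbers are my paraphrase of the order.

def model_must_report(total_training_flop: float, mostly_bio_data: bool) -> bool:
    """Return True if a training run crosses the interim reporting thresholds."""
    if total_training_flop > 1e26:      # general threshold for any model
        return True
    # lower threshold for models trained primarily on biology data
    return mostly_bio_data and total_training_flop > 1e23

# e.g. model_must_report(2e25, mostly_bio_data=False) -> False
#      model_must_report(5e23, mostly_bio_data=True)  -> True
```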
c
I don’t know what this subsection is about. Something about protecting the cybersecurity of “United States Infrastructure as a Service” products.
This includes some tracking of when foreigners want to use US AI systems in ways that might pose a cyber-security risk, using standards identical to the ones laid out above.
d
More stuff about IaaS, and verifying the identity of foreigners.
Thoughts:
Do those numbers add up? It seems like if you’re worried about models that were trained on 10^26 flops in total, you should be worried about much smaller training speed thresholds than 10^20 flops per second. 10^19 flops per second would allow you to train a 10^26 model in 115 days, i.e. about 4 months. Those standards don’t seem consistent. (The arithmetic is spelled out at the end of these thoughts.)
What do I think about this overall?
I mean, I guess reporting this stuff to the government is a good stepping stone for more radical action, but it depends on what the government decides to do with the reported info.
The thresholds match those that I’ve seen in strategy documents of people that I respect, so that seems promising. My understanding is that 10^26 flops is about 1-2 orders of magnitude more compute than was used to train our current biggest models.
The interest in red-teaming is promising, but again it depends on the implementation details.
I’m very curious about “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
What will concretely happen in the world as a result of “an initiative”? Does that mean allocating funding to orgs doing this kind of work? Does it mean setting up some kind of government agency like NIST to…invent benchmarks?
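(Spelling out the arithmetic behind the first thought above, using 86,400 seconds per day:)

\[
\frac{10^{26}\ \text{FLOP}}{10^{19}\ \text{FLOP/s}} = 10^{7}\ \text{s} \approx 115.7\ \text{days} \approx 3.8\ \text{months},
\qquad
\frac{10^{26}\ \text{FLOP}}{10^{20}\ \text{FLOP/s}} = 10^{6}\ \text{s} \approx 11.6\ \text{days}.
\]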
4.3
Summary:
They want to protect against AI cyber-security attacks. Mostly this entails government agencies issuing reports.
a – Some actions aimed at protecting “critical infrastructure” (whatever that means).
Heads of major agencies need to provide an annual report to the Secretary of Homeland Security on potential ways that AI opens up vulnerabilities in the critical infrastructure in their purview.
“…The Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.”
Government orgs will incorporate some new guidelines.
The Secretary of Homeland Security will work with government agencies to mandate guidelines.
Homeland Security will create an advisory committee to “provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.”
b – Using AI to improve cybersecurity
One interesting piece of that: “the Secretary of Defense and the Secretary of Homeland Security shall…each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks”, and then report on their results.
Commentary
This is mostly about issuing reports and guidelines. I have little idea if any of that is real or if this is just an expansion of lost-purpose bureaucracy. My guess is that there will be few people in these systems who have inside views that allow them to write good guidelines for their domains of responsibility regarding AI, and mostly these reports will be epistemically conservative and defensible, with a lot of “X is possibly a risk” where the authors have large uncertainty about how large the risk is.
Trying to use AI to improve cybersecurity sure is interesting. I hope that they can pull that off. It seems like one of the things that ~needs to happen for the world to end up in a good equilibrium is for computer security to get a lot better. Otherwise anyone developing a powerful model will have the weights stolen, and there’s a really vulnerable attack vector for not-even-very-capable AI systems. I think the best hope for that is using our AI systems to shore up computer security defense, and hoping that at higher-than-human levels of competence, cyber warfare is not so offense-dominant. (As an example, someone suggested maybe using AI to write a secure successor to C, and then using AI to “swap out” the lower layers of our computing stacks with that more secure low-level language.)
Could that possibly happen in government? I generally expect that private companies would be way more competent at this kind of technical research, but maybe the NSA is a notable and important exception? If they’re able to stay ten years ahead in cryptography, maybe they can stay 10 years ahead in AI cyberdefense.
This raises the question, what advantage allows the NSA to stay 10 years ahead? I assume that it is a combination of being able to recruit top talent, and that there are things that they are allowed to do that would be illegal for anyone else. But I don’t actually know if that’s true.
4.4 – Reducing AI-mediated chemical, biological, radiological, and nuclear (CBRN) threats, focusing on biological weapons in particular.
Summary:
a
The Secretary of Homeland Security (with help from other executive departments) will “evaluate” the potential of AI to both increase and to defend against these threats. This entails talking with experts and then submitting a report to the president.
In particular, it orders the Secretary of Defense (with the help of some other governmental agencies) to conduct a study that “assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks”, evaluates the risks associated with the biology datasets used to train such systems, and assesses ways to use AI to reduce biosecurity risks.
b – Specifically to reduce risks from synthetic DNA and RNA.
The Office of Science and Technology Policy (with the help of other executive departments) is going to develop a “framework” for synthetic DNA/RNA companies to “implement procurement and screening mechanisms”. This entails developing “criteria and mechanisms” for identifying dangerous nucleotide sequences, and establishing mechanisms for doing at-scale screening of synthetic nucleotides.
Once such a framework is in place, all (government?) funding agencies that fund life science research will make compliance with that framework a condition of funding.
All of this, once set up, needs to be evaluated and stress tested, and then a report sent to the relevant agencies.
Commentary:
The part about setting up a framework for mandatory screening of nucleotide sequences seems non-fake. Or at least it is doing more than commissioning assessments and reports.
And it seems like a great idea to me! Even aside from AI concerns, my understanding is that the manufacture of synthetic DNA is one major vector of biorisk. If you can effectively identify dangerous nucleotide sequences (and that is the part that seems most suspicious to me), this is one of the few obvious places to enforce strong legal requirements. These are not (yet) legal requirements, but making this a condition of funding seems like a great step.
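To make “screening mechanisms” slightly more concrete, here is a toy sketch of the mechanical shape such a screen could take. The k-mer approach, function names, and thresholds are all mine and purely illustrative; the genuinely hard parts (curating the list of sequences of concern and picking thresholds) are exactly what the framework is supposed to define:

```python
# Toy sketch only (not from the EO): flag a synthesis order whose k-mers overlap
# too much with a curated list of sequences of concern.

def kmers(seq: str, k: int = 20) -> set:
    """All length-k subsequences of a DNA/RNA string, uppercased."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_order(order_seq: str, sequences_of_concern: list, k: int = 20,
               overlap_threshold: float = 0.1) -> bool:
    """True if the order shares a suspicious fraction of its k-mers with any
    sequence on the concern list (the threshold is a made-up placeholder)."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    return any(
        len(order_kmers & kmers(hazard, k)) / len(order_kmers) >= overlap_threshold
        for hazard in sequences_of_concern
    )
```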
4.5
Summary
Aims to increase the general ability to identify AI-generated content, and to mark all federal AI-generated content as such.
a
The Secretary of Commerce will produce a report on the current and likely-future methods for: authenticating non-AI content, identifying AI content, watermarking AI content, and preventing AI systems from “producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual)”.
b
Using that report, the Secretary of Commerce will develop guidelines for detecting and authenticating AI content.
c
Those guidelines will be issued to relevant federal agencies.
d
Possibly those guidelines will be folded into the Federal Acquisition Regulation (whatever that is).
Commentary
Seems generally good to be able to distinguish between AI generated material and non-AI generated material. I’m not sure if this process will turn up anything real that meaningfully impacts anyone’s experience of communications from the government.
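For a sense of what one technical branch of this could look like: one published style of text watermarking uses a secret key to pseudo-randomly mark part of the vocabulary as “green”, biases generation toward green tokens, and then detects by counting how green a text is. The sketch below is my own toy illustration of the detection side (hash-based, word-level, with a hypothetical key), not anything the Commerce report or guidelines will necessarily contain:

```python
# Toy keyed "green list" watermark detector: unwatermarked text should be ~50%
# green under a random key; strongly green text yields a large positive z-score.
import hashlib
import math

SECRET_KEY = "demo-key"  # hypothetical key, for illustration only

def is_green(token: str, key: str = SECRET_KEY) -> bool:
    digest = hashlib.sha256((key + token.lower()).encode()).digest()
    return digest[0] % 2 == 0  # key-dependent coin flip per token

def green_z_score(text: str, key: str = SECRET_KEY) -> float:
    tokens = text.split()
    if not tokens:
        return 0.0
    greens = sum(is_green(t, key) for t in tokens)
    n = len(tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A z-score well above ~4 over a few hundred tokens would be strong evidence that
# the text came from a generator biased toward this key's green list.
```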
4.6
Summary
The Secretary of Commerce is responsible for running a “consultation process on potential risks, benefits, other implications” of open source foundation models, and then for submitting a report to the president on the results.
Commentary
More assessments and reports.
This does tell me that someone in the executive department has gotten the memo that open source models mean that it is easy to remove the safeguards that companies try to put in them.
4.7
Summary
Some stuff about federal data that might be used to train AI systems. It seems like they want to restrict data that might enable CBRN weapons or cyberattacks, but otherwise make the data public?
Commentary
I think I don’t care very much about this?
4.8
Summary
This orders a National Security Memorandum on AI to be submitted to the president. This memorandum is supposed to “provide guidance to the Department of Defense, other relevant agencies”
Commentary:
I don’t think that I care about this?
Section 5 – Promoting Innovation and Competition.
5.1 – Attracting AI Talent to the United States.
Summary
This looks like a bunch of stuff to make it easier for foreign workers with AI relevant expertise to get visas, and to otherwise make it easy for them to come to, live in, work in, and stay in, the US.
Commentary
I don’t know the sign of this.
Do we want AI talent to be concentrated in one country?
On the one hand, that seems like it accelerates timelines some, especially if there are 99.9th-percentile AI researchers who wouldn’t otherwise be able to get visas, but who can now work at OpenAI. (It would surprise me if this is the case? Those people should all be able to get O-1 visas, right?)
On the other hand, the more AI talent is concentrated in one country, the more of the field falls under a single regulatory jurisdiction that could slow down AI. If enough of the AI talent is in the US, then regulations that slow down AI development in the US alone have a substantial impact, at least in the short term, before that talent moves, but maybe also in the long term, if researchers care more about continuing to live in the US than they do about making cutting-edge AI progress.
5.2
Summary
a –
The director of the NSF will do a bunch of things to spur AI research.
…”launch a pilot program implementing the National AI Research Resource (NAIRR)”. This is evidently something that is intended to boost AI research, but I’m not clear on what it is or what it does.
…”fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs.”
…”establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order.”
b –
The Secretary of Energy will make a pilot program for training AI scientists.
c –
The Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office will sort out how generative AI should impact patents, and issue guidance. There will be some similar stuff for copyright.
d –
The Secretary of Homeland Security “shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks”.
e –
The HHS will prioritize grant-making to AI initiatives.
f –
Something for the veterans.
g –
Something for climate change
Commentary
Again. I don’t know how fake this is. My guess is not that fake? There will be a bunch of funding for AI stuff, from the public sector, in the next two years.
Most of this seems like random political stuff.
5.3 – Promoting Competition.
Summary
a –
The heads of various departments are supposed to promote competition in AI, including in the inputs to AI (NVIDIA)?
b
The Secretary of Commerce is going to incentivize competition in the semiconductor industry, via a bunch of methods including:
“implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem”
mentorship programs
Increasing the resources available to startups (including datasets)
Increasing funding for semiconductor R&D
c – The Administrator of the Small Business Administration will support small businesses innovating and commercializing AI
d
Commentary
This is a lot of stuff. I don’t know that any of it will really impact how many major players there are at the frontier of AI in 2 years.
My guess is probably not much. I don’t think the government knows how to create NVIDIAs or OpenAIs.
What the government can do is break up monopolies, but they’re not doing that here.
My high level takeaways
Mostly, this executive order doesn’t seem to push for much object-level action. Instead, it orders a bunch of assessments to be done, and reports on those assessments to be written and passed up to the president.
My best guess is that this is basically an improvement?
I expect something like the following to happen:
The relevant department heads talk with a bunch of experts.
They write up very epistemically conservative reports in which they say “we’re pretty sure that our current models in early 2024 can’t help with making bioweapons, but we don’t know (and can’t really know) what capabilities future systems will have, and therefore can’t really know what risk they’ll pose.”
The sitting president will then be weighing those unknown levels of national security risks against obvious economic gains and competition with China.
In general, this executive order means that the Executive branch is paying attention. That seems, for now, pretty good.
(Though I do remember in 2015 how excited and optimistic people in the rationality community were about Elon Musk, “paying attention”, and that ended with him founding OpenAI, what many of those folks consider to be the worst thing that anyone had ever done to date. FTX looked like a huge success worthy of pride, until it turned out that it was a damaging and unethical fraud. I’ve become much more circumspect about which things are wins, especially wins of the form “powerful people are paying attention”.)
My guess is that this comment would be much more readable with the central chunk of it in a google doc, or failing that a few levels fewer of indented bullets.
e.g. Take this section.
Thoughts:
Do those numbers add up? It seems like if you’re worried about models that were trained on 10^26 flops in total, you should be worried about much smaller training speed thresholds than 10^20 flops per second? 10^19 flops per second, would allow you to train a 10^26 model in 115 days, e.g. about 4 months. Those standards don’t seem consistent.
What do I think about this overall?
I mean, I guess reporting this stuff to the government is a good stepping stone for more radical action, but it depends on what the government decides to do with the reported info.
The thresholds match those that I’ve seen in strategy documents of people that I respect, so that that seems promising. My understanding is that 10^26 FLOPS is about 1-2 orders of magnitude larger than our current biggest models.
The interest in red-teaming is promising, but again it depends on the implementation details.
I’m very curious about “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
What will concretely happen in the world as a result of “an initiative”? Does that mean allocating funding to orgs doing this kind of work? Does it mean setting up some kind of government agency like NIST to…invent benchmarks?
I find it much more readable as the following prose rather than 5 levels of bullets. Less metacognition tracking the depth.
Thoughts

Do those numbers add up? It seems like if you’re worried about models that were trained on 10^26 flops in total, you should be worried about much smaller training speed thresholds than 10^20 flops per second. 10^19 flops per second would allow you to train a 10^26 model in 115 days, i.e. about 4 months. Those standards don’t seem consistent.

What do I think about this overall? I mean, I guess reporting this stuff to the government is a good stepping stone for more radical action, but it depends on what the government decides to do with the reported info. The thresholds match those that I’ve seen in strategy documents of people that I respect, so that seems promising. My understanding is that 10^26 flops is about 1-2 orders of magnitude more compute than was used to train our current biggest models. The interest in red-teaming is promising, but again it depends on the implementation details.

I’m very curious about “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.” What will concretely happen in the world as a result of “an initiative”? Does that mean allocating funding to orgs doing this kind of work? Does it mean setting up some kind of government agency like NIST to…invent benchmarks?
Possibly. I wrote this as personal notes, originally, in full nested-list format. Then I spent 20 minutes removing some of the nested-list-ness in WordPress, which was very frustrating. I would definitely have organized it better if WordPress was less frustrating.
I did make a google doc format. Maybe the main lesson is that I should have edited it there.