AI labs’ statements on governance

June 6, 2024: THIS POST HAS BEEN SUCCEEDED BY Companies’ policy advocacy. READ THAT INSTEAD OF THIS. THIS POST WILL NOT BE MAINTAINED.

This is a collection of statements on government policy, regulation, and standards from leading AI labs and their leadership.

As of 7 August 2023, I believe this post has all of the relevant announcements/blogposts from the three labs it covers, but I expect it is missing a couple of relevant speeches/interviews with lab leadership.[1] Suggestions are welcome.

My quotes tend to focus on AI safety rather than other governance goals.

Within sections, sources are roughly sorted by priority.

OpenAI

Governance of superintelligence (May 2023)

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.
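
Note: to make the compute-tracking idea concrete, here is a minimal sketch of how a reporting threshold might be checked, using the common rule of thumb of roughly 6 FLOPs per parameter per training token to estimate training compute. The threshold value and the example numbers are my assumptions for illustration, not anything OpenAI has specified.

```python
# Hypothetical sketch: checking an estimated training-compute figure against
# a reporting threshold. The ~6 * params * tokens approximation is a standard
# rule of thumb; the threshold below is invented for illustration.

REPORTING_THRESHOLD_FLOPS = 1e26  # hypothetical oversight trigger

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 2T tokens (~8.4e23 FLOPs)
# would fall well below this particular threshold.
print(crosses_threshold(70e9, 2e12))  # False
```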

Planning for AGI and beyond (Feb 2023)

We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.

Altman Senate testimony (May 2023)

Written testimony (before the hearing):

There are several areas I would like to flag where I believe that AI companies and governments can partner productively.

First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.

Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.

Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting.

Questions for the Record (after the hearing):

What are the most important factors for Congress to consider when crafting legislation to regulate artificial intelligence? . . . What specific guardrails and/or regulations do you support that would allow society to benefit from advances in artificial intelligence while minimizing potential risks? [Altman gave identical answers to these two questions]

Any new laws related to AI will become part of a complex legal and policy landscape. A wide range of existing laws already apply to AI, including to our products. And in sectors like medicine, education, and employment, policy stakeholders have already begun to adapt existing laws to take account of the ways that AI impacts those fields. We look forward to contributing to the development of a balanced approach that addresses the risks from AI while also enabling Americans and people around the world to benefit from this technology.

We strongly support efforts to harmonize the emergent accountability expectations for AI, including the efforts of the NIST AI Risk Management Framework, the U.S.-E.U. Trade and Technology Council, and a range of other global initiatives. While these efforts continue to progress, and even before new laws are fully implemented, we see a role for ourselves and other companies to make voluntary commitments on issues such as pre-deployment testing, content provenance, and trust and safety.

We are already doing significant work on responsible and safe approaches to developing and deploying our models, including through red-teaming and quantitative evaluation of potentially dangerous model capabilities and risks. We report on these efforts primarily through a published document that we currently call a System Card. We are refining these approaches in tandem with the broader public policy discussion.

For future generations of the most highly capable foundation models, which are likely to prove more capable than models that have been previously shown to be safe, we support the development of registration, disclosure, and licensing requirements. Such disclosure could help provide policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. To be beneficial and not create new risks, it is crucial that any such regimes prioritize the security of the information disclosed. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards.

. . .

During the hearing, you testified that “a new framework” is necessary for imposing liability for harms caused by artificial intelligence—separate from Section 230 of the Communications Decency Act—and offered to “work together” to develop this framework. What features do you consider most important for a liability framework for artificial intelligence?

Any new framework should apportion responsibility in such a way that AI services, companies who build on AI services, and users themselves appropriately share responsibility for the choices that they each control and can make, and have appropriate incentives to take steps to avoid harm.

OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our usage policies. These policies are designed to prohibit the use of our models and tools in ways that may cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI’s Terms of Use which, among other things, prohibit the use of our services to harm people’s rights, and prohibit presenting output from our services as being human-generated when it was not.

One important consideration for any liability framework is the level of discretion that should be granted to companies like OpenAI, and people who develop services using these technologies, in determining the level of freedom granted to users. If liability frameworks are overly restrictive, the capabilities that are offered to users could in turn be heavily censored or restricted, leading to potentially stifling outcomes and negative implications for many of the beneficial capabilities of AI, including free speech and education. However, if liability frameworks are too lax, negative externalities may appear where a company benefits from lack of oversight and regulation at the expense of the overall good of society. One of the critical features of any liability framework is to attempt to find and continually refine this balance.

Given these realities, it would be helpful for an assignment of rights and responsibilities related to harms to recognize that the results of AI systems are not solely determined by these systems, but instead respond to human-driven commands. For example, a framework should take into account the degree to which each actor in the chain of events that resulted in the harm took deliberate actions, such as whether a developer clearly stipulated allowed/disallowed usages or developed reasonable safeguards, and whether a user disregarded usage rules or acted to overcome such safeguards.

AI services should also be encouraged to ensure a baseline of safety and risk disclosures for our products to minimize potential harm. This thinking underlies our approach of putting our systems through safety training and testing prior to release, frank disclosures of risk and mitigations, and enforcement against misuse. Care should be taken to ensure that liability frameworks do not inadvertently create unintended incentives for AI providers to reduce the scope or visibility of such disclosures.

Furthermore, many of the highest-impact uses of new AI tools are likely to take place in specific sectors that are already covered by sector-specific laws and regulations, such as health, financial services and education. Any new liability regime should take into consideration the extent to which existing frameworks could be applied to AI technologies as an interpretive matter. To the extent new or additional rules are needed, they would need to be harmonized with these existing laws.

Hearing transcript:

[Blumenthal asked Altman “the effect on jobs . . . is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern.” His reply only mentioned jobs. Marcus noted that “Sam’s worst fear I do not think is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out.” Altman vaguely replied about “significant harm to the world.”]

. . .

I think the US should lead here and do things first, but to be effective we do need something global. . . . There is precedent—I know it sounds naive to call for something like this, and it sounds really hard—there is precedent. We’ve done it before with the IAEA. We’ve talked about doing it for other technologies. Given what it takes to make these models—the chip supply chain, the limited number of competitive GPUs, the power the US has over these companies—I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world.

. . .

Do you agree with me that the simplest way and the most effective way [to implement licensing of AI tools] is to have an agency that is more nimble and smarter than Congress . . . [overseeing] what you do?

We’d be enthusiastic about that.

. . .

I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. . . . Please tell me in plain English, two or three reforms, regulations, if any, that you would, you would implement if you were queen or king for a day.

Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards . . . as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.

. . .

I’m a believer in defense in depth. I think that there should be limits on what a deployed model is capable of, and then what it actually does too.

. . .

Would you pause any further development for six months or longer?

So first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5. We don’t have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audits, red teaming, safety standards that a model needs to pass before training. If we pause for six months, then I’m not really sure what we do then—do we pause for another six? Do we kind of come up with some rules then? The standards that we have developed and that we’ve used for GPT-4 deployment, we want to build on those, but we think that’s the right direction, not a calendar clock pause. There may be times—I expect there will be times—when we find something that we don’t understand and we really do need to take a pause, but we don’t see that yet. Nevermind all the benefits.

You don’t see what yet? You’re comfortable with all of the potential ramifications from the current existing technology?

I’m sorry. We don’t see the reasons to not train a new one. For deploying, as I mentioned, I think there’s all sorts of risky behavior and there’s limits we put, we have to pull things back sometimes, add new ones. I meant we don’t see something that would stop us from training the next model, where we’d be so worried that we’d create something dangerous even in that process, let alone the deployment that would happen.

NTIA comment (Jun 2023)

OpenAI’s Current Approaches

We are refining our practices in tandem with the evolving broader public conversation. Here we provide details on several aspects of our approach.

System Cards

Transparency is an important element of building accountable AI systems. A key part of our approach to accountability is publishing a document that we currently call a System Card, for new AI systems that we deploy. Our approach draws inspiration from previous research work on model cards and system cards. To date, OpenAI has published two system cards: the GPT-4 System Card and DALL-E 2 System Card.

We believe that in most cases, it is important for these documents to analyze and describe the impacts of a system – rather than focusing solely on the model itself – because a system’s impacts depend in part on factors other than the model, including use case, context, and real world interactions. Likewise, an AI system’s impacts depend on risk mitigations such as use policies, access controls, and monitoring for abuse. We believe it is reasonable for external stakeholders to expect information on these topics, and to have the opportunity to understand our approach.

Our System Cards aim to inform readers about key factors impacting the system’s behavior, especially in areas pertinent for responsible usage. We have found that the value of System Cards and similar documents stems not only from the overview of model performance issues they provide, but also from the illustrative examples they offer. Such examples can give users and developers a more grounded understanding of the described system’s performance and risks, and of the steps we take to mitigate those risks. Preparation of these documents also helps shape our internal practices, and illustrates those practices for others seeking ways to operationalize responsible approaches to AI.

Qualitative Model Evaluations via Red Teaming

Red teaming is the process of qualitatively testing our models and systems in a variety of domains to create a more holistic view of the safety profile of our models. We conduct red-teaming internally with our own staff as part of model development, as well as with people who operate independently of the team that builds the system being tested. In addition to probing our organization’s capabilities and resilience to attacks, red teams also use stress testing and boundary testing methods, which focus on surfacing edge cases and other potential failure modes with potential to cause harm.

Red teaming is complementary to automated, quantitative evaluations of model capabilities and risks that we also conduct, which we describe in the next section. It can shed light on risks that are not yet quantifiable, or those for which more standardized evaluations have not yet been developed. Our prior work on red teaming is described in the DALL-E 2 System Card and the GPT-4 System Card.

Our red teaming and testing is generally conducted during the development phase of a new model or system. Separately from our own internal testing, we recruit testers outside of OpenAI and provide them with early access to a system that is under development. Testers are selected by OpenAI based on prior work in the domains of interest (research or practical expertise), and have tended to be a combination of academic researchers and industry professionals (e.g., people with work experience in Trust & Safety settings). We evaluate and validate results of these tests, and take steps to make adjustments and deploy mitigations where appropriate.

OpenAI continues to take steps to improve the quality, diversity, and experience of external testers for ongoing and future assessments.

Quantitative Model Evaluations

In addition to the qualitative red teaming described above, we create automated, quantitative evaluations for various capabilities and safety-oriented risks, including risks that we find via methods like red teaming. These evaluations allow us to compare different versions of our models with each other, iterate on research methodologies that improve safety, and ultimately act as an input into decision-making about which model versions we choose to deploy. Existing evaluations span topics such as erotic content, hateful content, and content related to self-harm, among others, and measure the propensity of the models to generate such content.
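
Note: a minimal sketch of what an automated propensity evaluation like this might look like, assuming a shared prompt set, a policy classifier, and per-version generation callables as placeholders (none of this reflects OpenAI’s actual tooling):

```python
# Illustrative sketch of a quantitative content-propensity evaluation used to
# compare model versions; the model interfaces and classifier are stand-ins.
from typing import Callable

def violation_rate(generate: Callable[[str], str],
                   prompts: list[str],
                   violates_policy: Callable[[str], bool]) -> float:
    """Fraction of prompts whose completion is flagged as policy-violating."""
    completions = [generate(p) for p in prompts]
    return sum(violates_policy(c) for c in completions) / len(completions)

def compare_model_versions(models: dict[str, Callable[[str], str]],
                           prompts: list[str],
                           violates_policy: Callable[[str], bool]) -> dict[str, float]:
    """Score each candidate version on the same prompt set; results can feed
    into deployment decisions alongside qualitative red teaming."""
    return {name: violation_rate(gen, prompts, violates_policy)
            for name, gen in models.items()}
```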

Usage Policies

OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our usage policies. These policies are designed to prohibit the use of our models and tools in ways that cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI’s Terms of Use which, among other things, prohibit the use of our services to harm people’s rights, and prohibit presenting output from our services as being human-generated when it was not.

We take steps to limit the use of our models for harmful activities by teaching models to refuse to respond to certain types of requests that may lead to potentially harmful responses. In addition, we use a mix of reviewers and automated systems to identify and take action against misuse of our models. Our automated systems include a suite of machine learning and rule-based classifier detections designed to identify content that might violate our policies. When a user repeatedly prompts our models with policy-violating content, we take actions such as issuing a warning, temporarily suspending the user, or in severe cases, banning the user.
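
Note: the enforcement flow described above (rule-based plus learned detections, with escalating action against repeat violators) might look roughly like the following sketch; the blocklist, classifier, and escalation thresholds are all invented for illustration:

```python
# Illustrative sketch of layered misuse detection with escalating
# enforcement; every component and threshold here is hypothetical.
from collections import Counter

BLOCKLIST = {"example banned phrase"}   # rule-based detection layer
violation_counts: Counter = Counter()   # running per-user violation tally

def ml_classifier_flags(text: str) -> bool:
    """Stand-in for a learned policy classifier."""
    return False

def enforce(user_id: str, prompt: str) -> str:
    """Return the enforcement action for one request."""
    flagged = any(term in prompt.lower() for term in BLOCKLIST) \
              or ml_classifier_flags(prompt)
    if not flagged:
        return "allow"
    violation_counts[user_id] += 1
    if violation_counts[user_id] == 1:
        return "warn"      # first violation: warning
    if violation_counts[user_id] <= 3:
        return "suspend"   # repeated violations: temporary suspension
    return "ban"           # severe or persistent misuse: ban
```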

Open Challenges in AI Accountability

As discussed in the RFC, there are many important questions related to AI Accountability that are not yet resolved. In the sections that follow, we provide additional perspective on several of these questions.

Assessing Potentially Dangerous Capabilities

Highly capable foundation models have beneficial capabilities as well as the potential to cause harm. As the capabilities of these models get more advanced, so do the scale and severity of the risks they may pose, particularly if under direction from a malicious actor or if the model is not properly aligned with human values.

Rigorously measuring advances in potentially dangerous capabilities is essential for effectively assessing and managing risk. We are addressing this by exploring and building evaluations for potentially dangerous capabilities that range from simple, scalable, and automated tools to bespoke, intensive evaluations performed by human experts. We are collaborating with academic and industry experts, and ultimately aim to contribute to the development of a diverse suite of evaluations that can contribute to the formation of best practices for assessing emerging risks in highly capable foundation models. We believe dangerous capability evaluations are an increasingly important building block for accountability and governance in frontier AI development.

Open Questions About Independent Assessments

Independent assessments of models and systems, including by third parties, may be increasingly valuable as model capabilities continue to increase. Such assessments can strengthen accountability and transparency about the behaviors and risks of AI systems.

Some forms of assessment can occur within a single organization, such as when a team assesses its own work or when a team or part of the organization produces a model and another team or part, acting independently, tests that model. A different approach is to have an external third party conduct an assessment. As described above, we currently rely on a mixture of internal and external evaluations of our models.

Third-party assessments may focus on specific deployments, a model or system at some moment in time, organizational governance and risk management practices, specific applications of a model or system, or some combination thereof. The thinking and potential frameworks to be used in such assessments continue to evolve rapidly, and we are monitoring and considering our own approach to assessments.

For any third-party assessment, the process of selecting auditors/assessors with appropriate expertise and incentive structures would benefit from further clarity. In addition, selecting the appropriate expectations against which to assess organizations or models is an open area of exploration that will require inputs from different stakeholders. Finally, it will be important for assessments to consider how systems might evolve over time and build that into the process of an assessment/audit.

Registration and Licensing for Highly Capable Foundation Models

We support the development of registration and licensing requirements for future generations of the most highly capable foundation models. Such models may have sufficiently dangerous capabilities to pose significant risks to public safety; if they do, we believe they should be subject to commensurate accountability requirements.

It could be appropriate to consider disclosure and registration expectations for training processes that are expected to produce highly capable foundation models. Such disclosure could help provide policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. It is crucial that any such regimes prioritize the security of the information disclosed.

AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards; indeed, many of the accountability practices that the NTIA will be considering could be appropriate licensure requirements. Introducing licensure requirements at the computing provider level could also be a powerful complementary tool for enforcement.

There remain many open questions in the design of registration and licensing mechanisms for achieving accountability at the frontier of AI development. We look forward to collaborating with policymakers in addressing these questions.

Altman interview (Bloomberg, Jun 2023)

At this point, given how much people see the economic benefits and potential, no company could stop it. But global regulation—which I only think should be on these powerful, existential-risk-level systems—global regulation is hard, and you don’t want to overdo it for sure, but I think global regulation can help make it safe, which is a better answer than stopping it, and I also don’t think stopping it would work. . . .

We for example don’t think small startups and open-source models below a certain very high capability threshold should be subject to a lot of regulation. We’ve seen what happens to countries that try to overregulate tech; I don’t think that’s what we want here. But also we think it is super important that as we think about a system that could be at a [high risk level], that we have a global and as coordinated a response as possible. . . .

What do you think about the certification system of AI models that the Biden administration has proposed?

I think there’s some version of that that’s really good. I think that people training models that are way above– any model scale that we have today, but above some certain capability threshold– I think you should need to go through a certification process for that. I think there should be external audits and safety tests.

Frontier AI regulation (Jul 2023)

Note: some authors are affiliated with OpenAI, including Jade Leung and Miles Brundage, two governance leads. Some authors are affiliated with Google DeepMind. This paper is listed under OpenAI since OpenAI includes it on their Research page. It’s not clear how much OpenAI endorses it.

Self-regulation is unlikely to provide sufficient protection against the risks from frontier AI models: government intervention will be needed. We explore options for such intervention. These include:

  • Mechanisms to create and update safety standards for responsible frontier AI development and deployment. These should be developed via multi-stakeholder processes, and could include standards relevant to foundation models overall, not exclusive to frontier AI. These processes should facilitate rapid iteration to keep pace with the technology.

  • Mechanisms to give regulators visibility into frontier AI development, such as disclosure regimes, monitoring processes, and whistleblower protections. These equip regulators with the information needed to address the appropriate regulatory targets and design effective tools for governing frontier AI. The information provided would pertain to qualifying frontier AI development processes, models, and applications.

  • Mechanisms to ensure compliance with safety standards. Self-regulatory efforts, such as voluntary certification, may go some way toward ensuring compliance with safety standards by frontier AI model developers. However, this seems likely to be insufficient without government intervention, for example by empowering a supervisory authority to identify and sanction non-compliance; or by licensing the deployment and potentially the development of frontier AI. Designing these regimes to be well-balanced is a difficult challenge; we should be sensitive to the risks of overregulation and stymieing innovation on the one hand, and moving too slowly relative to the pace of AI progress on the other.

Next, we describe an initial set of safety standards that, if adopted, would provide some guardrails on the development and deployment of frontier AI models. Versions of these could also be adopted for current AI models to guard against a range of risks. We suggest that at minimum, safety standards for frontier AI development should include:

  • Conducting thorough risk assessments informed by evaluations of dangerous capabilities and controllability. This would reduce the risk that deployed models possess unknown dangerous capabilities, or behave unpredictably and unreliably.

  • Engaging external experts to apply independent scrutiny to models. External scrutiny of the safety and risk profile of models would both improve assessment rigor and foster accountability to the public interest.

  • Following standardized protocols for how frontier AI models can be deployed based on their assessed risk. The results from risk assessments should determine whether and how the model is deployed, and what safeguards are put in place. This could range from deploying the model without restriction to not deploying it at all. In many cases, an intermediate option—deployment with appropriate safeguards (e.g., more post-training that makes the model more likely to avoid risky instructions)—may be appropriate.

  • Monitoring and responding to new information on model capabilities. The assessed risk of deployed frontier AI models may change over time due to new information, and new post-deployment enhancement techniques. If significant information on model capabilities is discovered post-deployment, risk assessments should be repeated, and deployment safeguards updated.

Going forward, frontier AI models seem likely to warrant safety standards more stringent than those imposed on most other AI models, given the prospective risks they pose. Examples of such standards include: avoiding large jumps in capabilities between model generations; adopting state-of-the-art alignment techniques; and conducting pre-training risk assessments. Such practices are nascent today, and need further development.

Altman interview (NYmag, Mar 2023)

I think the thing that I would like to see happen immediately is just much more insight into what companies like ours are doing, companies that are training above a certain level of capability at a minimum. A thing that I think could happen now is the government should just have insight into the capabilities of our latest stuff, released or not, what our internal audit procedures and external audits we use look like, how we collect our data, how we’re red-teaming these systems, what we expect to happen, which we may be totally wrong about. [“What I mean is government auditors sitting in our buildings.”] We could hit a wall anytime, but our internal road-map documents, when we start a big training run, I think there could be government insight into that. And then if that can start now– I do think good regulation takes a long time to develop. It’s a real process. They can figure out how they want to have oversight. . . .

Those efforts probably do need a new regulatory effort, and I think it needs to be a global regulatory body. And then people who are using AI, like we talked about, as a medical adviser, I think the FDA can give probably very great medical regulation, but they’ll have to update it for the inclusion of AI. But I would say creation of the systems and having something like an IAEA that regulates that is one thing, and then having existing industry regulators still do their regulation [Ed: he was cut off] . . . .

Section 230 doesn’t seem to cover generative AI. Is that a problem?

I think we will need a new law for use of this stuff, and I think the liability will need to have a few different frameworks. If someone is tweaking the models themselves, I think it’s going to have to be the last person who touches it has the liability, and that’s —

But it’s not full immunity that the platform’s getting —

I don’t think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this, you want users to be able to get the experience they want, but the idea of no one having any limits for generative AI, for AI in general, that feels super-wrong.

Brockman House testimony (Jun 2018)

Written testimony:

Policy recommendations

1. Measurement. Many other established voices in the field have tried to combat panic about AGI by instead saying it is not something to worry about or is unfathomably far off. We recommend neither panic nor a lack of caution. Instead, we recommend investing more resources into understanding where the field is, how quickly progress is accelerating, and what roadblocks might lie ahead. We’re exploring this problem via our own research and support of initiatives like the AI Index. But there’s much work to be done, and we are available to work with governments around the world to support their own measurement and assessment initiatives — for instance, we participated in a GAO-led study on AI last year.

2. Foundation for international coordination. AGI’s impact, like that of the Internet before it, won’t track national boundaries. Successfully using AGI to make the world better for people, while simultaneously preventing rogue actors from abusing it, will require international coordination of some form. Policymakers today should invest in creating the foundations for successful international coordination in AI, and recognize that the more adversarial the climate in which AGI is created, the less likely we are to achieve a good outcome. We think the most practical place to start is actually with the measurement initiatives: each government working on measurement will create teams of people who have a strong motivation to talk to their international counterparts to harmonize measurement schemes and develop global standards.

Brockman Senate testimony (Nov 2016)

Anthropic

Charting a Path to AI Accountability (Jun 2023)

Anthropic’s NTIA comment is a longer version of this blogpost.

There is currently no robust and comprehensive process for evaluating today’s advanced artificial intelligence (AI) systems, let alone the more capable systems of the future. Our submission presents our perspective on the processes and infrastructure needed to ensure AI accountability. Our recommendations consider the NTIA’s potential role as a coordinating body that sets standards in collaboration with other government agencies like the National Institute of Standards and Technology (NIST).

In our recommendations, we focus on accountability mechanisms suitable for highly capable and general-purpose AI models. Specifically, we recommend:

  • Fund research to build better evaluations

    • Increase funding for AI model evaluation research. Developing rigorous, standardized evaluations is difficult and time-consuming work that requires significant resources. Increased funding, especially from government agencies, could help drive progress in this critical area.

    • Require companies in the near-term to disclose evaluation methods and results. Companies deploying AI systems should be mandated to satisfy some disclosure requirements with regard to their evaluations, though these requirements need not be made public if doing so would compromise intellectual property (IP) or confidential information. This transparency could help researchers and policymakers better understand where existing evaluations may be lacking.

    • Develop in the long term a set of industry evaluation standards and best practices. Government agencies like NIST could work to establish standards and benchmarks for evaluating AI models’ capabilities, limitations, and risks that companies would comply with.

  • Create risk-responsive assessments based on model capabilities

    • Develop standard capabilities evaluations for AI systems. Governments should fund and participate in the development of rigorous capability and safety evaluations targeted at critical risks from advanced AI, such as deception and autonomy. These evaluations can provide an evidence-based foundation for proportionate, risk-responsive regulation.

    • Develop a risk threshold through more research and funding into safety evaluations. Once a risk threshold has been established, we can mandate evaluations for all models against this threshold.

      • If a model falls below this risk threshold, existing safety standards are likely sufficient. Verify compliance and deploy.

      • If a model exceeds the risk threshold and safety assessments and mitigations are insufficient, halt deployment, significantly strengthen oversight, and notify regulators. Determine appropriate safeguards before allowing deployment.

  • Establish pre-registration for large AI training runs

    • Establish a process for AI developers to report large training runs ensuring that regulators are aware of potential risks. This involves determining the appropriate recipient, required information, and appropriate cybersecurity, confidentiality, IP, and privacy safeguards.

    • Establish a confidential registry for AI developers conducting large training runs to pre-register model details with their home country’s national government (e.g., model specifications, model type, compute infrastructure, intended training completion date, and safety plans) before training commences. Aggregated registry data should be protected to the highest available standards and specifications.

  • Empower third party auditors that are…

    • Technically literate – at least some auditors will need deep machine learning experience;

    • Security-conscious – well-positioned to protect valuable IP, which could pose a national security threat if stolen; and

    • Flexible – able to conduct robust but lightweight assessments that catch threats without undermining US competitiveness.

  • Mandate external red teaming before model release

    • Mandate external red teaming for AI systems, either through a centralized third party (e.g., NIST) or in a decentralized manner (e.g., via researcher API access) to standardize adversarial testing of AI systems. This should be a precondition for developers who are releasing advanced AI systems.

    • Establish high-quality external red teaming options before they become a precondition for model release. This is critical as red teaming talent currently resides almost exclusively within private AI labs.

  • Advance interpretability research

    • Increase funding for interpretability research. Provide government grants and incentives for interpretability work at universities, nonprofits, and companies. This would allow meaningful work to be done on smaller models, enabling progress outside frontier labs.

    • Recognize that regulations demanding interpretable models would currently be infeasible to meet, but may be possible in the future pending research advances.

  • Enable industry collaboration on AI safety via clarity around antitrust

    • Regulators should issue guidance on permissible AI industry safety coordination given current antitrust laws. Clarifying how private companies can work together in the public interest without violating antitrust laws would mitigate legal uncertainty and advance shared goals.

We believe this set of recommendations will bring us meaningfully closer to establishing an effective framework for AI accountability. Doing so will require collaboration between researchers, AI labs, regulators, auditors, and other stakeholders. Anthropic is committed to supporting efforts to enable the safe development and deployment of AI systems. Evaluations, red teaming, standards, interpretability and other safety research, auditing, and strong cybersecurity practices are all promising avenues for mitigating the risks of AI while realizing its benefits.

We believe that AI could have transformative effects in our lifetime and we want to ensure that these effects are positive. The creation of robust AI accountability and auditing mechanisms will be vital to realizing this goal.

Dario Amodei Senate testimony (Jul 2023)

Written testimony (before the hearing):

I will devote most of this prepared testimony to discussing the risks of AI, including what I believe to be extraordinarily grave threats to US national security over the next 2 to 3 years. . . .

The medium-term risks are where I would most like to draw the subcommittee’s attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology. . . .

Policy Recommendations

In our view these concerns merit an urgent policy response. The ideal policy response would address not just the specific risks we’ve identified above, but would at the same time provide a framework for addressing as many other risks as possible – without, of course, hampering innovation more than is necessary. We recommend three broad classes of policies:

  • First, the U.S. must secure the AI supply chain, in order to maintain its lead while keeping these technologies out of the hands of bad actors. This supply chain runs all the way from semiconductor manufacturing equipment to AI models stored on the servers of companies like ours. A number of governments have taken steps in this regard. Specifically, the critical supply chain includes:

    • Semiconductor manufacturing equipment, such as lithography machines.

    • Chips used for training AI systems, such as GPUs.

    • Trained AI systems, which are vulnerable to “export” through cybertheft or uncontrolled release.

      • Companies such as Anthropic and others developing frontier AI systems should have to comply with stringent cybersecurity standards in how they store their AI systems. We have shared with the U.S. government and other labs our views of appropriate cybersecurity best practices, and are moving to implement these practices ourselves.

  • Second, we recommend a “testing and auditing regime” for new and more powerful models. Similar to cars or airplanes, we should consider the AI models of the near future to be powerful machines which possess great utility, but that can be lethal if designed badly or misused. New AI models should have to pass a rigorous battery of safety tests both during development and before being released to the public or to customers.

    • National security risks such as misuse of biology, cybersystems, or radiological materials should have top priority in testing due to the mix of imminence and severity of threat.

    • However, the tests could also cover other concerns such as bias, potential to create misinformation, privacy, child safety, and respect for copyright.

    • Similarly, the tests could measure the capacity for autonomous systems to escape control, beginning to get a handle on the risks of future systems. There are already nonprofit organizations, such as the Alignment Research Center, attempting to develop such tests.

    • It is important that testing and auditing happen at regular checkpoints during the process of training powerful models to identify potentially dangerous capabilities or other risks so that they can be mitigated before training progresses too far.

    • The recent voluntary commitments announced by the White House commit some companies (including Anthropic) to do this type of testing, but legislation could go further by mandating these tests for all models and requiring that they pass according to certain standards before deployment.

    • It is worth stating clearly that given the current difficulty of controlling AI systems even where safety is prioritized, there is a real possibility that these rigorous standards would lead to a substantial slowdown in AI development, and that this may be a necessary outcome. Ideally, however, the standards would catalyze innovation in safety rather than slowing progress, as companies race to become the first company technologically capable of safely deploying tomorrow’s AI systems.

  • Third, we should recognize that the science of testing and auditing for AI systems is in its infancy, and much less developed than it is for airplanes and automobiles. In particular, it is not currently easy to entirely understand what bad behaviors an AI system is capable of, without broadly deploying it to users. Thus, it is important to fund both measurement and research on measurement, to ensure a testing and auditing regime is actually effective.

    • Our suggestion for the agency to oversee this process is NIST, whose mandate focuses explicitly on measurement and evaluation. However many other agencies could also contribute expertise and structure to this work.

    • Anthropic has been a vocal supporter of the proposed National AI Research Resource (NAIRR). The NAIRR could, among other purposes, be used to fund research on measurement, evaluation, and testing, and could do so in the public interest rather than tied to a corporation.

The three directions above are synergistic: responsible supply chain policies help give America enough breathing room to impose rigorous standards on our own companies, without ceding our national lead. Funding measurement in turn makes these rigorous standards meaningful.

In conclusion, it is essential that we mitigate the grave national security risks presented by near-future AI systems, while also maintaining our lead in this critical technology and reaping the benefits of its advancement.

Hearing transcript:

[I haven’t gone through this; see also this.]

[Expand NIST] (Apr 2023)

This is a policy memo; there is also a corresponding blogpost. It follows up on Comment on “Study To Advance a More Productive Tech Economy” below. It also succeeds Clark Senate testimony (Sep 2022).

With this additional resourcing, NIST could continue and expand its work on AI assurance efforts like:

  • Cataloging existing AI evaluations and benchmarks used in industry and academia

  • Investigating the scientific validity of existing evaluations (e.g., adherence to quality control practices, effects of technical implementation choices on evaluation results, etc.)

  • Designing novel evaluations that address limitations of existing evaluations

  • Developing technical standards for how to identify vulnerabilities in open-ended systems

  • Developing disclosure standards to enhance transparency around complex AI systems

  • Partnering with allies on international standards to promote multilateral interoperability

  • Further developing and updating the AI Risk Management Framework

More resourcing will allow NIST to build out much-needed testing environments for today’s generative AI systems.

Frontier Model Security (Jul 2023)

Future advanced AI models have the potential to upend economic and national security affairs within and among nation-states. Given the strategic nature of this technology, frontier AI research and models must be secured to levels far exceeding standard practices for other commercial technologies in order to protect them from theft or misuse.

In the near term, governments and frontier AI labs must be ready to protect advanced models and model weights, and the research that feeds into them. This should include measures such as the development of robust best practices widely diffused among industry, as well as treating the advanced AI sector as something akin to “critical infrastructure” in terms of the level of public-private partnership in securing these models and the companies developing them.

Many of these measures can begin as voluntary arrangements, but in time it may be appropriate to use government procurement or regulatory powers to mandate compliance. . . .

We encourage extending SSDF to encompass model development inside of NIST’s standard-setting process.

In the near term, these two best practices [viz. multi-party authorization and secure model development framework] could be established as procurement requirements applying to AI companies and cloud providers contracting with governments – alongside standard cybersecurity practices that also apply to these companies. As U.S. cloud providers provide the infrastructure that many current frontier model companies use, procurement requirements will have an effect similar to broad market regulation and can work in advance of regulatory requirements.

Comment on “Study To Advance a More Productive Tech Economy” (Feb 2022)

Followed up on by the ‘Expand NIST’ sources.

The past decade of AI development charts a future course of increasingly large, high-performing industry models that can be adapted for a wide variety of applications. Without intervention or investment, however, we risk a future where AI development and oversight are controlled by a handful of actors, motivated primarily by commercial priorities. To ensure these systems drive a more productive and broadly beneficial economy, we must expand access and representation in their creation and evaluation.

A robust assurance ecosystem would help increase public confidence in AI technology, enable a more competitive R&D environment, and foster a stronger U.S. economy.

The federal government can support this by:

  • Increasing funding for academic researchers to access compute resources through efforts such as the National AI Research Resource (NAIRR) and the University Technology Center Program proposed in the United States Innovation and Competition Act (USICA)

  • Providing financial grants to researchers, especially those currently underrepresented, who are developing assurance indicators in areas such as bias and fairness or novel forms of AI system oversight

  • Prioritizing the development of AI testbeds, centralized datasets, and standardized testing protocols

  • Identifying evaluations created by independent researchers and creating a catalog of validated tests

  • Standardizing the essential components of self-designed evaluations and establishing norms for how evaluation results should be disclosed

Google DeepMind

NTIA comment (Google and Google DeepMind, Jun 2023)

While it is tempting to look for silver-bullet policy solutions, AI raises complex questions that require nuanced answers. It is a 21st century technology that requires a 21st century governance model. We need a multi-layered, multi-stakeholder approach to AI governance. This will include:

  • Industry, civil society, and academic experts developing and sharing best practices and technical standards for responsible AI, including around safety and misinformation issues;

  • A hub-and-spoke model of national regulation; and

  • International coordination among allies and partners, including around geopolitical security and competitiveness and alignment on regulatory approaches.

At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a “Department of AI.” AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed.

Maximizing the economic opportunity from AI will also require a joint effort across federal, state, and local governments, the private sector, and civil society to equip workers to harness AI-driven tools. AI is likely to generate significant economy-wide benefits. At the same time, to mitigate displacement risks, the private sector will need to develop proof-of-concept efforts on skilling, training, and continuing education, while the public sector can help validate and scale these efforts to ensure workers have wrap-around support. Smart deployment of AI coupled with thoughtful policy choices and an adaptive safety net can ensure that AI ultimately leads to higher wages and better living standards.

With respect to U.S. regulation to promote accountability, we urge policymakers to:

  • Promote enabling legislation for AI innovation leadership. Federal policymakers can eliminate legal barriers to AI accountability efforts, including by establishing competition safe harbors for open public-private and cross-industry collaboration on AI safety research, and clarifying the liability for misuse and abuse of AI systems by different users (e.g., researchers, authors, creators of AI systems, implementers, and end users). Policymakers should also consider related legal frameworks that support innovation, such as adopting a uniform national privacy law that protects personal information and an AI model’s incidental use of publicly available information.

  • Support proportionate, risk-based accountability measures. Deployers of high-risk AI systems should provide documentation about their systems and undergo independent risk assessments focused on specific applications.

  • Regulate under a “hub-and-spoke” model rather than creating a new AI regulator. Under this model, regulators across the government would engage a central, coordinating agency with AI expertise, such as NIST, with Office of Management and Budget (OMB) support, for technical guidance on best practices on AI accountability.

  • Use existing authorities to expedite governance and align AI and traditional rules. Where appropriate, sectoral regulators would provide updates clarifying how existing authorities apply to the use of AI systems, as well as how organizations can demonstrate compliance of an AI system with these existing regulations.

  • Assign to AI deployers the responsibility of assessing the risk of their unique deployments, auditing, and other accountability mechanisms as a result of their unparalleled awareness of their specific uses and related risks of the AI system.

  • Define appropriate accountability metrics and benchmarks, as well as terms that may be ambiguous, to guide compliance. Recognize that many existing systems are imperfect and that even imperfect AI systems may, in some settings, be able to improve service levels, reduce costs, or increase affordability and availability.

  • Consider the tradeoffs between different policy objectives, including efficiency and productivity enhancements, transparency, fairness, privacy, security, and resilience.

  • Design regulation to promote competitiveness, responsible innovation, and broad access to the economic benefits of AI.

  • Require high standards of cybersecurity protections (including access controls) and develop targeted “next-generation” trade control policies.

  • Avoid requiring disclosures that include trade secrets or confidential information (potentially advantaging adversaries) or stymie this innovative sector as it continues to evolve.

  • Prepare the American workforce for AI-driven job transitions and promote opportunities to broadly share AI’s benefits.

Finally, NTIA asks how policymakers can otherwise advance AI accountability. The U.S. government should:

  • Continue building technical and human capacity into the ecosystem to enable effective risk management. The government should deepen investment in fundamental responsible AI research (including bias and human-centered systems design) through federal agency initiatives, research centers, and foundations, as well as by creating and supporting public-private partnerships.

  • Drive international policy alignment, working with allies and partners to develop common approaches that reflect democratic values. Policymakers can support common standards and frameworks that enable interoperability and harmonize global AI governance approaches. This can be done by: (1) enabling trusted data flows across national borders, (2) establishing multinational AI research resources, (3) encouraging the adoption of common approaches to AI regulation and governance and a common lexicon, based on the work of the Organisation for Economic Co-operation and Development (OECD), (4) working within standard-setting bodies such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) to establish rules, benchmarks, and governance mechanisms that can serve as a baseline for domestic regulatory approaches and deter regulatory fragmentation, (5) using trade and economic agreements to support the development of consistent and non-discriminatory AI regulations, (6) promoting copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models, while supporting workable opt-outs for websites, and (7) establishing more effective mechanisms for information and best-practice sharing among allies and between the private and the public sectors.

  • Explore updating procurement rules to incentivize AI accountability, and ensure OMB and the Federal Acquisition Regulatory Council are engaged in any such updates. It will be critical for agencies who are further ahead in their development of AI procurement practices to remain coordinated and aligned upon a common baseline to effectively scale responsible governance (e.g., through the NIST AI Risk Management Framework (AI RMF)).

The United States currently leads the world in AI development, and with the right policies that support both trustworthy AI and innovation, the United States can continue to lead and help allies enhance their own competitiveness while aligning around a positive and responsible vision for AI. Centering policies around economic opportunity, promoting responsibility and trust, and furthering our collective security will advance today’s and tomorrow’s AI innovation and unleash benefits across society.

Exploring institutions for global AI governance (Jul 2023)

Note: this is a Google DeepMind blogpost about the paper International Institutions for Advanced AI. Some authors of the paper are affiliated with Google DeepMind. One author is affiliated with OpenAI. It’s not clear how much Google DeepMind endorses it.

We explore four complementary institutional models to support global coordination and governance functions:

  • An intergovernmental Commission on Frontier AI could build international consensus on opportunities and risks from advanced AI and how they may be managed. This would increase public awareness and understanding of AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.

  • An intergovernmental or multi-stakeholder Advanced AI Governance Organisation could help internationalise and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It may also perform compliance monitoring functions for any international governance regime.

  • A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives.

  • An AI Safety Project could bring together leading researchers and engineers, and provide them with access to computation resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resourcing, and coordination.

Hassabis interview (Klein, Jul 2023)

If we’re getting to a point where somebody is getting near something like a general intelligence system, is that too powerful a technology to be in private hands? Should this be something that whichever corporate entity gets there first controls? Or do we need something else to govern it?

My personal view is that this is such a big thing in its fullness of time. I think it’s bigger than any one corporation or even one nation. I think it needs international cooperation. I’ve often talked in the past about a CERN-like effort for A.G.I., and I quite like to see something like that as we get closer, maybe in many years from now, to an A.G.I. system, where really careful research is done on the safety side of things, understanding what these systems can do, and maybe testing them in controlled conditions, like simulations or games first, like sandboxes, very robust sandboxes with lots of cybersecurity protection around them. I think that would be a good way forward as we get closer towards human-level A.I. systems.

Stuff besides statements

Labs do some policy advocacy in private. I mostly don’t know what their lobbying looks like, but it’s probably important.

Open letters related to governance:

  • CAIS: Statement on AI Risk (May 2023)

    • “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    • The CEOs of OpenAI, Anthropic, and Google DeepMind signed.

    • Signatories included 59 from Google DeepMind, 28 from OpenAI, and 15 from Anthropic.

  • FLI: Pause Giant AI Experiments (Mar 2023)

    • It was not signed by the leadership of OpenAI, Anthropic, or DeepMind.

    • It appears to have 8 signatories from DeepMind, 3 from OpenAI, and none from Anthropic. Not all signatures were authenticated.

Labs sometimes do research relevant to governance, which matters directly and gives evidence about their attitudes.

Lab leadership sometimes tweets about their attitudes (very nonexhaustive):

  • Anthropic

    • Jack Clark

      • Tweet (Jun 2023)

        • “if best ideas for AI policy involve depriving people of the ‘means of production’ of AI (e.g H100s), then you don’t have a hugely viable policy . . . . policy which looks like picking winners is basically bad policy, and compute controls (and related ideas like ‘licensing’) have this problem. [And a public option is supposed to help somehow.]”

        • This is right in part but seems largely confused/bad to me, and it’s not clear how Clark proposes solving the “maybe it will be possible to train dangerous models with moderate amounts of hardware” problem. But I’m inclined to let him elaborate before passing judgment.

      • Tweet (Jun 2023)

        • “A world where we can push a button and stop larger compute things being built and all focus on safety for a while is good. I’m not sure also the compute control stuff gets you that and there are ways to game it, so need effort on other ideas also. . . . A total frontier ban is fine, it’s just that where you and I probably have different worldviews is in how you make the ban work. If we could wave a wand and guarantee everyone worldwide stops doing stuff at the frontier for a while and redirects to safety, then that’s good.”

      • Tweet (Jul 2023)

        • “New essay: What should the UK’s £100 million Foundation Model Taskforce do? tl;dr: the UK has a unique opportunity to gain policy leverage and improve safety of AI landscape by having FM taskforce eval AI models for misuses and alignment risks. In this highly specific proposal I try to lay out exactly what the FM taskforce should do, list different projects and priorities, and sketch out staffing for such an initiative. My basic position is once you can evaluate AI systems you can gain leverage in policy. Most AI policy is confused or fuzzy because you aren’t able to evaluate an AI system for various properties. This is also why the developers of AI go into all policy conversations with asymmetric information—they know how to eval their own systems for some stuff. If we want a better ‘political economy of AI’ it probably starts with reducing this information asymmetry by having govs and other third-parties develop ability to eval AI systems, ranging from proprietary models to open source ones.”

  • OpenAI

    • Greg Brockman

      • Tweet (Apr 2023)

        • “We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly-sophisticated predictions of their capability and impact, and require best practices such as dangerous capability testing. We think governance of large-scale compute usage, safety standards, and regulation of/​lesson-sharing from deployment are good ideas, but the details really matter and should adapt over time as the technology evolves. It’s also important to address the whole spectrum of risks from present-day issues (e.g. preventing misuse or self-harm, mitigating bias) to longer-term existential ones.”

Labs sometimes take actions relevant to governance (not exhaustive):

  • OpenAI and Anthropic work with ARC Evals to check their models for dangerous capabilities before deployment

Other sources (these are lower-priority than everything else in this post):

  • Exclusive: OpenAI Lobbied E.U. to Water Down AI Regulation (Time, Jun 2023)

    • This is not clearly bad but tentatively seems slightly bad, and slightly more so given that OpenAI appears to have avoided discussing it publicly.

    • Maybe Google did something similar: Big Tech Is Already Lobbying to Water Down Europe’s AI Rules (Time, Apr 2023).

    • An interviewer said “the EU is considering labeling ChatGPT high-risk” and Altman replied ‘I have followed the development of the EU’s AI Act, but it changed. It’s obviously still in development. I don’t know enough about the current version of it to say this definition of what high-risk is and this way of classifying it, this is what you have to do. I don’t know if I would say that’s good or bad. I think totally banning this stuff is not the right answer, and I think that not regulating this stuff at all is not the right answer either. And so the question is, is that going to end in the right balance? I think if the EU is saying, “No one in Europe gets to use ChatGPT.” Probably not what I would do, but if the EU is saying, “Here’s the restrictions on ChatGPT and any service like it.” There’s plenty of versions of that I could imagine that are super-sensible.’

  • Google challenges OpenAI’s calls for government AI czar (CNBC, Jun 2023)

    • ‘While OpenAI CEO Sam Altman touted the idea of a new government agency focused on AI to deal with its complexities and license the technology, Google said it preferred a “multi-layered, multi-stakeholder approach to AI governance.”’

    • The best approach is not clear to me.

Other collections & analysis

  1.

    Potential sources not [yet added / worth adding]:

    Lots of governance papers by governance people at labs, including some listed on labs’ research pages and probably some with corresponding blogposts:

    - https://openai.com/research/improving-verifiability

    - https://openai.com/research/preparing-for-malicious-uses-of-ai

    Adjacent to statements on governance are statements on AI to policy people, e.g. https://jack-clark.net/2023/07/18/ai-safety-and-corporate-power-remarks-given-at-the-united-states-security-council/.

    Import AI (and the rest of https://jack-clark.net, in particular https://importai.substack.com/p/import-ai-337-why-i-am-confused-about and https://jack-clark.net/2023/07/05/what-should-the-uks-100-million-foundation-model-taskforce-do/) (in Clark’s personal capacity, but that’s OK).

    See https://twitter.com/AnnaCLenhart/status/1701008114987008186.

    Anthropic tweet.

    Stuff from Altman’s world tour in May–Jun 2023.

    DeepMind: https://www.theguardian.com/commentisfree/2023/aug/04/ai-companies-regulation-international-inclusive

    Moore’s Law for Everything (Altman 2021) and Sam Altman and Bill Gale on Taxation Solutions for Advanced AI (GovAI 2022).

    OpenAI: Confidence-Building Measures for Artificial Intelligence.

    Inflection AI: Suleyman: Tweet (Jul 2023):

    ‘It’s time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is “Scale & Capabilities Audits” 1./ There are two ways I see this working. Firstly an industry funded consortium that everyone voluntarily signs up to. In some ways this might be quicker and easier route, but the flaws are also obvious. 2./ It would almost immediately be accused of capture, and might be tempted to softball the audit process. More robust would be a new government agency of some kind, with a clear mandate to audit every model above certain scale and capability thresholds. 3./ This would be a big step change, fundamentally at odds with the old skool culture of the tech industry. But it’s the right thing to do and [it’s] time for a culture shift. We in AI should welcome third party audits. 4./ The critical thing now is to design a sensible system, and agree the benchmarks that will actually offer real oversight, and ensure that oversight is tied to delivering AI that works in the interests of everyone. Let’s get started right away.’

    But note his AI-catastrophe-skepticism elsewhere (citation needed).

    Also https://inflection.ai/g7-hiroshima-code-of-conduct.

    Also Tweet or what it links to. Also probably lots of more recent tweets and interviews.

    Google:

    - Public Policy Perspectives (Google AI)

    - A Policy Agenda for Responsible Progress in Artificial Intelligence (Google 2023)

    - AI at Google: our principles

    Microsoft: How do we best govern AI? and Governing AI: A Blueprint for the Future (May 2023) (see especially “licensing regime”). Maybe see also https://www.microsoft.com/en-us/ai/responsible-ai and https://www.microsoft.com/cms/api/am/binary/RE4pKH5.

    People talk to governments privately—e.g. I should ask Jack Clark if he’s willing to share some of what he says privately?

Crossposted to the EA Forum.